Donnerstag, 14. Dezember 2017

IT-Tage 2017 in Frankfurt

It has been a while since my last conference, but I finally managed to sign up again. This is my first time at the IT-Tage in Frankfurt, a conference hosted by the German magazine Informatik Aktuell.

Three days with 6 parallel tracks seemed like a lot to choose from, even though 3 tracks mainly revolve around SQL database technologies. Although those are not of much interest to me, I was still looking forward to this, as on the other hand there are lots of talks about micro services, security and programming languages.

But before I go into detail about the talks, some general information. The venue is the Kap Europa Center in the heart of Frankfurt, easy to reach and rather classy looking. What strikes me as a bit odd is that all talks are held in German even though many use English slides. This seems to be intentional by the organizers and surely has some advantages, but I think it should not be an issue for anyone working in IT to follow English talks, and it would broaden the potential audience as well as the pool of speakers.

A positive thing on the other hand is the concept of track hosts. All conferences have someone responsible for making the hardware work properly, but here those people do quite a bit more. They give a quick introduction about the speaker and the topic, encourage discussion afterwards and give additional information about the following program.

Day 1

Harald Gröger - General Data Protection Regulation

This new regulation will become effective in May 2018 and has some serious effects on how companies have to handle all aspects of personal data: gathering, processing and even deletion.

I will not repeat every detail here but the key changes that got stuck in my mind are:

  • you will have to explicitly ask for permission to gather any data
  • the user has to actively give his permission
  • it is not obvious what kind of information has to be considered personal
  • if a user requests deletion of his information, that has to happen immediately, but only if the data is not needed for other purposes like order processing/billing or legal matters
Especially the last point will cause us developers serious headaches: handling all the corner cases without breaking anything and without violating the law.

Harald works at IBM and also lectures at the University of Würzburg on related topics. So he knows his way around data protection, and it showed throughout the solid and entertaining talk. TBH I would have enjoyed it more if the topic had been more fun, but it was still a good start for the day.

Dominik Schadow - Build Security

Since the first session was about as theoretical as it can get, the second one had to be "nerd on". So I went into this talk about build security hoping for some hands-on stuff that I might actually be able to use back at the office.

So what do we mean by "build security"? Your build process should verify that your code and the resulting application is as secure as it needs to be. Because that is what we all want, a working application that can't be hacked by script kiddies within 5 minutes ;-)

To help us developers achieve this our build pipeline should perform certain analysis steps. It should perform them as often as possible/feasible and as soon as possible.

The easiest approach to this is the well known static code analysis. A widely used tool here is FindBugs, which I consider pretty awesome. But FindBugs is not really suitable for security checks; it does have checks for some injections and programming errors, but no in-depth security analysis. Or does it...? There is a plugin called Find Security Bugs (FindSecBugs) which does exactly that: scan explicitly for security issues. (FindBugs itself lives on under the name SpotBugs these days.) In combination with SonarQube this can be a really useful and powerful tool.

But what apart from that? What should we even check for? As a starting point everyone should at least know of the OWASP checklist showing the top security issues and the recommended countermeasures. To support working with that there are two OWASP tools.

The first one is OWASP Dependency Check. This tool inspects the dependencies you use to build and run your application and checks them for known vulnerabilities, telling you which libraries have security flaws and to which version you should update etc. Brilliant!

The second one is OWASP ZAP. This is a powerful tool and a large portion of the session was spent on showing how you can configure it and what is important to consider. For my taste it could have been a few fewer Jenkins config screens (it has a Jenkins plugin, yeah), but it was still good to hear about all of the pitfalls etc.

ZAP can be used as a proxy between your app and the clients to manipulate and inspect requests and data. That is not that new. But it can also execute attacks on your application based on known real world security issues with actual working exploits. You can configure which endpoints it should attack, or you can let it record your requests and then replay them in an altered way in attack mode. If you think of using it, be sure to read the documentation very, very carefully as this tool can easily break your neck if you screw up.

But as great as those tools are, they also have downsides:
  • each tool increases the build time, some only a bit like FindSecBugs (only seconds or maybe minutes), others massively like ZAP, which can cause a build of a few minutes to take hours due to the amount of requests performed
  • sadly all tools are known to report false positives, some more, some less. But there are configuration options to limit those and keep the pain rather low
Even though it was not exactly as I expected I think it was a pretty good talk from a tech guy showing us what can be done to improve the security of our software with tools available for everyone.

Keynote Philipp Sandner - Blockchain and IoT

In this keynote we got a glimpse of what changes block chains, the concept behind things like bitcoin, might enable in the future.

Basically a block chain creates sequential pieces of information where every piece has the hash of its predecessor encoded into it, thus making sure no element of the chain can be manipulated unnoticed.
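The chaining idea fits in a few lines of Java. This is a toy sketch only — real block chains add proof of work, distribution, Merkle trees and more:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

/** Toy hash chain: every block encodes the hash of its predecessor. */
public class HashChain {

    record Block(String data, String previousHash) {
        String hash() {
            try {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest((data + previousHash).getBytes(StandardCharsets.UTF_8));
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) sb.append(String.format("%02x", b));
                return sb.toString();
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }
    }

    /** True if every block references the current hash of its predecessor. */
    static boolean verify(List<Block> chain) {
        for (int i = 1; i < chain.size(); i++) {
            if (!chain.get(i).previousHash().equals(chain.get(i - 1).hash())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Block> chain = new ArrayList<>();
        chain.add(new Block("genesis", ""));
        chain.add(new Block("car pays parking meter 0.50", chain.get(0).hash()));
        chain.add(new Block("car pays toll 2.00", chain.get(1).hash()));
        System.out.println(verify(chain)); // true

        // Manipulating an earlier block breaks every later link:
        chain.set(1, new Block("car pays parking meter 5000", chain.get(0).hash()));
        System.out.println(verify(chain)); // false
    }
}
```

Changing any earlier block invalidates every later link, which is exactly the tamper-evidence the keynote referred to.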

Block chains would allow easy transactions not only between humans but also between humans and machines or even between machines, like a car transferring some kind of currency to a parking meter etc.

Most of the use cases shown were not that new and could have been implemented without block chains. But what block chains add is the possibility to ingrain checks, processes and restrictions, but also convenience functions, into the chain itself. That makes it possible to set up ecosystems for things like smart contracts or IoT chains that users can hook up to easily.

Of course the keynote also covered the current state of the so-called crypto currencies like bitcoin. I was surprised how many other crypto currencies are out there and how much bitcoin has been skyrocketing in the last months. But it is also scary that no one really knows if this will carry on or if there is a bubble just waiting to burst.

Going further on this subject, the basic question is: does value always have to be coupled to a physical good? I don't know and I don't think anyone else really does, so only time can tell.

As before it was clear the speaker knew his subject, and he was able to convey his enthusiasm to the audience.

Christoph Iserlohn - Security in Micro Service Environments

Aaaand again going from theory to real world hands-on stuff, at least I thought I was.

The first 15 minutes were spent on defining what a micro service is, then on how difficult it is to get security know-how into your teams and into your development life cycle. Everything true and important in itself, but I still fail to see why we needed it in this kind of depth. But ok.

In a world where most people are going from monolith to micro services we often forget that we have to secure our services. Sometimes that is considered done after slapping some OAuth on them but that is not enough.

By splitting your application up into several services you also open up new attack vectors. Services talk over the network, so you have to secure the network, make sure only authenticated clients are talking to your services, and ensure that a request passed on to other services carries all the credentials and context information needed to verify that it is valid.

Next was a short excursus about the history and relations of OAuth and OpenID, followed by how security tests should be part of the build pipeline. What I took from this part was that the testing tools seem to be lacking and that you need to build a lot of what you need yourself.

The last part focused on secret management. When going all in on a micro services architecture you have a lot of secrets to maintain, deploy and revoke: passwords, certificates, API keys and shared secrets. The recommendation here was to use Vault and, if possible, an HSM (Hardware Security Module), plus services that let secrets change dynamically.

All in all I got the impression the speaker was rather gloomy about this subject. Everything had the vibe of "this is not working and that is not working, we had problems here and problems there". This was emphasized by the fact that there were no real solutions presented, only notions of what issues they ran into but not how they solved them.

Of course micro services are no silver bullet and sometimes it is better to build larger systems, but are they really that bad? I doubt it.

I think this was a real pity because I had the impression Christoph has a lot of know how in that area that he could share.

Lars Röwekamp - Serverless Architecture

Serverless is one of the latest buzzwords (at least I think so) stirring around the IT community. But what do people actually mean when they talk about serverless?

Lars started with a good definition, "If nobody uses it, you don't have to pay for it", or the more widely known quote "don't pay idle". So is my Heroku dyno serverless? ;-) Not quite, as shown by the serverless manifesto. I am not going to write all of that down, you can look it up. I am just going over the most important things Lars pointed out.

In serverless we are talking about functions as deployment units (FaaS), so a unit much, much smaller than your regular micro services, something that really only does one thing. Functions react to events, do something and thereby trigger other events that trigger other functions. Well, this sounds like NodeJS in the cloud to me :-)

The idea is that functions are scaled on a request basis, so every time a new handler is needed it is started up and shut down when it is not needed anymore. The reality seems not to be quite there yet, but we might be getting there eventually.

What is also an interesting idea is that the developer does not care about the container the function will be running in. He just writes his code and the rest is done by the build and deployment processes.

This went on for a while as Lars covered the whole manifesto, and I felt he gave a very objective overview of the topic with the occasional reality check thrown in, showing that theory and practice often differ.

After that we had a look at how such a function is built. We are actually not deploying a bare function but a lambda. A lambda consists of a few things. First the actual code; that's what people usually mean when talking about the function. The function receives a context object and an event object. The event tells the function what has happened and the context gives information about the available resources. Additionally you should also always define security roles to make sure your function is only called by those who are actually allowed to do so, and to make sure your function only does what you want it to do. You would not want to spin up e.g. EC2 instances when you don't need them.
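As a rough illustration of that event/context split, here is a plain-Java sketch. The type names are hypothetical stand-ins, not the real AWS SDK interfaces (the real ones live in com.amazonaws.services.lambda.runtime and are similar in spirit, but differ in detail):

```java
import java.util.Map;

/** Hypothetical sketch of the FaaS handler pattern described above. */
public class LambdaSketch {

    /** The event tells the function what has happened. */
    record Event(String type, Map<String, String> payload) {}

    /** The context describes the available resources (hypothetical subset). */
    record Context(String functionName, int memoryLimitMb, long remainingTimeMs) {}

    interface Handler<I, O> {
        O handle(I event, Context context);
    }

    /** A function that really only does one thing. */
    static class GreetingHandler implements Handler<Event, String> {
        public String handle(Event event, Context context) {
            return "Handled " + event.type() + " in " + context.functionName();
        }
    }

    public static void main(String[] args) {
        Event event = new Event("order.created", Map.of("orderId", "42"));
        Context ctx = new Context("greeting-fn", 128, 3000);
        System.out.println(new GreetingHandler().handle(event, ctx));
        // → Handled order.created in greeting-fn
    }
}
```

The platform instantiates the handler per request and supplies both objects; the developer only writes the body of handle.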

At first glance serverless sounds awesome, because you hardly pay anything, right? Well, not quite. What you have to consider is that with serverless a lot of your development and testing also has to happen in the cloud. You can run tests locally, but in the end you always have to check if it really runs as expected in the cloud environment.

Then followed a few examples of possible scenarios, which I think you best look up on the slides.

And finally the obligatory pitfalls section. Again this was presented very objectively, showing that of course the complexity of applications cannot simply be removed by a new architectural pattern. All issues people have with micro services also apply to serverless and are in some cases even amplified due to the even higher degree of distribution and smaller granularity. A special mention has to go to the dreaded vendor lock-in: if you go serverless, your code will have dependencies on the vendor you use. But usually doing everything yourself that the vendor does for you is magnitudes more work than changing those dependencies should you decide to change vendors.

For the rest I also recommend you check out the slides. As a summary I want to say that Lars gave the best talk of the conference so far and it really shows he knows what he is talking about.

Werner Eberling - My Name is Nobody – the Architect in Scrum

This will be a short summary. Basically this was an experience report on how a single architect is bound to fail in an environment with several scrum teams working on a single huge project.

I'll try to sum it up: If the architect is outside of the teams, he has no team support, his decisions will not be backed by the developers and he does not have enough knowledge of the team domains to understand the issues a team might have with his design. If you make that one architect part of all teams, he will have absolutely no time to tend to the individual teams, he will be in meetings all the time and soon burn out. So the only and imo obvious solution is: you need architecture know-how in all teams, the teams have to design the architecture themselves, and to maintain the big picture all cross-team concerns have to be discussed in a community of practice.

It seems that the scrum masters just tried to follow what they considered textbook instead of simply thinking about what is needed to make the project work, which imo is the core idea of scrum: identify issues and then work as a team to remove them. And somebody should have noticed that something is weird when someone gets appointed or hired as architect while the scrum masters claim that this role does not exist and is not needed. At least one side has to be wrong in that argument.

I might be in a fortunate situation as this is how we have worked for years, and it struck me as odd that there are still organizations that do not understand how to handle this. It is nothing new, and anyone with at least some brains should see that the single architect approach is prone to fail, or at least notice it after a year.

But so far this report does not do justice to Werner. While I did not take much from the talk his presentation was entertaining and solid. So if he decides to give another talk about a more technical topic I will try to join.

Guido Schmutz - Micro Services with Kafka Ecosystem

We start off with a definition of micro services, again ;-)

But after that Guido got real about how to build software with an event bus system like Kafka, even though the first part focused on the concepts without taking Kafka into consideration.

Even though this was very interesting the summary is rather short. There seem to be three best practices for event bus systems:

CQRS (Command Query Responsibility Segregation) 

There is also an article by Martin Fowler about this topic, so I'll try to keep it short: To allow for a large amount of transactions and responsive systems, it is advisable to have read and write operations on your data separated. E.g. you have one service writing data to a data store (e.g. an event store) but have read operations performed by a different service working on a view of that data store. This view can have a completely different format and technology underneath. All you have to do is propagate write changes from the original data store to the view.
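A minimal sketch of that read/write split might look like this (hypothetical names, and synchronous propagation for brevity — in a real system the view would be updated asynchronously via the bus):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy CQRS sketch: writes append to an event store, a projection
 *  builds a separate read model from those events. */
public class CqrsSketch {

    record Event(String accountId, int amount) {}

    /** Write side: only ever appends events. */
    static class CommandService {
        final List<Event> eventStore = new ArrayList<>();
        void deposit(String accountId, int amount) {
            eventStore.add(new Event(accountId, amount));
        }
    }

    /** Read side: a view in a completely different format, fed from the store. */
    static class QueryService {
        final Map<String, Integer> balances = new HashMap<>();
        void project(Event e) {
            balances.merge(e.accountId(), e.amount(), Integer::sum);
        }
        int balance(String accountId) {
            return balances.getOrDefault(accountId, 0);
        }
    }

    public static void main(String[] args) {
        CommandService write = new CommandService();
        QueryService read = new QueryService();

        write.deposit("alice", 100);
        write.deposit("alice", 50);
        // Propagate write changes to the read view (in reality: async, via the bus).
        write.eventStore.forEach(read::project);

        System.out.println(read.balance("alice")); // 150
    }
}
```

The point is that the query side can use whatever storage format suits reads best, because it is rebuilt entirely from the write side's events.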

Event Sourcing: State is saved as a chain of events

The New York Times published an article on how they are using this technique to save all their published data. All editorial changes are saved as events in an event store in the order they occurred. So a client can simply resume from the last event it has seen without the bus system having to keep track of each client.
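The resume-from-offset idea can be sketched like this (a hypothetical in-memory log standing in for something like Kafka; real logs are persistent and partitioned):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy append-only event log: clients track their own read offset,
 *  the log itself keeps no per-client state. */
public class EventLog {

    private final List<String> events = new ArrayList<>();

    void append(String event) {
        events.add(event);
    }

    /** Returns all events from the given offset onwards. */
    List<String> readFrom(int offset) {
        return new ArrayList<>(events.subList(offset, events.size()));
    }

    public static void main(String[] args) {
        EventLog log = new EventLog();
        log.append("article created");
        log.append("headline edited");

        int clientOffset = 0;
        List<String> seen = log.readFrom(clientOffset);
        clientOffset += seen.size(); // the client remembers its own position

        log.append("paragraph added");
        // The client simply resumes from the last event it had seen:
        System.out.println(log.readFrom(clientOffset)); // [paragraph added]
    }
}
```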

One Source of Truth

Only ever write into one data store; all other stores must be populated from that main data store. Otherwise, in case of an error you will run into consistency issues, as distributed transactions are not (well) supported (yet) and also have a performance impact.

To round it up Guido gave a rundown of the reasons why he uses Kafka for these kinds of systems, the main one being that Kafka is a slim implementation focusing only on this one purpose.

For me this was an interesting overview of event bus architectures and an impulse to look into them some more.

D. Bornkessel & C. Iserlohn: Quick Introduction to Go

This was exactly what I expected: a rundown of where Go came from, what the design criteria were and a brief feature overview, followed by a small tutorial on basic programming tasks.

Here a short list of the topics I found most interesting:

  • Dependency Management: This seems to be rather broken/bad/cumbersome, which would be a big issue for me. On the other hand most things already ship with Go, so maybe it is not that bad?
  • No package hierarchies: ok.. well if you don't need a lot of dependencies then maybe that works.. 
  • Compile breaks on unused variables: On the one hand this is good for finding bugs etc., but during development this could cause some frustration. Maybe you get used to it..
  • Functions can return tuples: Yeah, that can be of great help in some cases
  • No exceptions: Instead you have to check if the called function returned an error object together with the return value. So that is why they made functions return tuples, eh? Causes a lot of if checks.. *yuck*..
  • Casting also returns error objects like a function call
  • Implicit interfaces: You do not declare that you implement an interface; you just implement the methods of the interface and then your type implicitly is also of that interface type.. TBH I don't know yet if I think this is cool or horrible.. Something about this makes my skin itch.

Christian Schneider - Pen Testing using Open Source Tools

The final evening session, lasting two hours. Again security :-)

The first part was an exhaustive overview of the available tools you can use for different purposes:

Web Server Fingerprinting & Scanning

  • Nikto: Scans a webserver for typical issues like default applications still deployed, server state pages, missing headers etc
  • testssl: Does exactly what the name says; checks your SSL configuration for supported ciphers etc
  • OWASP O-Saft: An alternative to testssl 
  • OWASP ZAP: This part took up most of the time, which was good because the tool is very powerful, but it did not hold much new information for me because of Dominik Schadow's talk earlier. For those who do not remember: ZAP is an intercepting proxy which can also run complete and exhaustive attack scenarios on your application.
  • Arachni: Can spider Javascript applications better than the other tools
  • sqlmap: Database takeover tool. Man, that looked impressive AND scary. This tool performs a massive load of very deep injection tests on a single request, trying to get into your database to cause damage or extract information. The results are then available in a separate SQL shell/browser for inspection.

OS Scanning

  • lynis: Checks your server for available root exploits, like root crontabs being world-writable or dangerous sticky flags, and for necessary updates if run as root
  • linuxprivchecker: shows which root exploits exist in your kernel and libs

Whitebox Analysis

  • FindSecBugs/SpotBugs: Has also been covered in Dominik Schadow's talk, so nothing new here for us
  • ESLint with ScanJS: Allows scanning of Javascript for security bugs
  • Brakeman: Same for Ruby
  • OWASP Dependency Check: Also same as before ;-)

Then we got a live demo on Christian's machine showing us how ZAP behaves in attack mode, what the output looks like and how to tweak the configuration.

We also saw an XXE vulnerability being exploited to read the passwd file of the server. Even though this is nothing new from a theoretical point of view, it is something different to actually see it happen.

This was followed by a discussion until we finally decided to call it a day... ufff....

Day 2

Thorsten Maier & Falk Sippach - Agile Architecture

TBH I expected a different kind of talk here, something more along the lines of what a flexible and agile architecture could look like to suit changing needs. But that is something that probably does not even exist, so I should not have been surprised that it was actually about how to create and maintain an architecture in a dynamic environment ;-)

The basic problem Thorsten and Falk try to approach is that a system architecture is usually something very fundamental that can't be changed easily afterwards while an agile environment often demands changes late in the project.

What they presented were 12 different tasks an architect, or rather the team, should perform in cyclic iterations to see if the architecture still fits the project needs.

This cycle consists of 4 phases: design, implement, evaluate, improve. Much like the typical scrum process. The details can be found on the slides; I'll just summarize the main points:
  • gather the actual requirements not just abstract wishes, make sure you know what your stakeholders need and who you are dealing with
  • try to compare different architecture approaches, e.g. using ATAM or other metrics
  • document your architecture (in words and graphics) and explain it to other people to get early feedback and sanity checks
  • try to have documentation generated during the build, maybe have automatic code checks to find violations (not really sure about that tbh, that is what code reviews are for imo)
  • check if the requirements have changed and if the architecture still suits the needs
This all sounds very well and it can surely help to come up with a good architecture at the start and enable your team to stick to it. But the problem persists that if the requirements change drastically in a late phase, you will have the wrong architecture. So I don't really get how this addresses the problem at hand.

What I take from the talk is that it might be worth checking out arc42 and PlantUML for automated architecture documentation.

Raphael Knecht & Rouven Röhrig - App Development with React Native, React and Redux

Well, what can I say. I have no clue about React or Redux and so this talk did contain a LOT of new information for me and I think I am still trying to digest all of it.

Raphael and Rouven gave an overview of what React, React Native and Redux are, what you can do with them and what their experiences were. First off, with all of those tools you write Javascript or JSX, a syntax extension to Javascript. React is a view framework, so it handles only rendering. React Native compiles or transpiles or whateverpiles the Javascript (or JSX) code into native code for mobile devices. Redux handles everything apart from the view: it provides a state that can be changed by using actions and reducers, where the action states what should be done and the reducer how it should be done.

For details on how this is coupled I again have to refer you to the slides, as the diagrams explain it better than I could. An important notion is that a reducer is a pure function, so there are no side effects in those. But for some aspects you need side effects, such as logging. To handle those concerns Redux uses something called middleware.

There are two different middleware implementations (at least in this talk), Redux-Thunk and Redux-Saga, both with their own pros and cons. The speakers ended up using Saga as it seems to be more flexible in handling complex requirements, whereas Thunk tends to end up with large actions that are not maintainable.

Reasons why React/React-Native/Redux is cool:
  • good tool support for dev and build
  • hot and live reload in the IDE
  • remote debugging
  • powerful inspector
Reasons why React/React-Native/Redux is sometimes not so cool:
  • not everything can be done in JS/JSX, sometimes you need native code
  • currently version 0.51.0 and there can be breaking changes
  • sometimes not all dependencies are available in compatible versions due to different release cycles
Maybe, when I have time, I can try making an app with React. *dreams on*

Dragan Zuvic - Kotlin in Production: Integration into the Java Landscape

tl;dr - I think that was a great talk. I like Kotlin, at least what I have seen so far, and it was good to have someone explain not only the basic but also the advanced features, as well as the issues, of this language.

I would say Dragan is a Kotlin fanboy; he is very enthusiastic about the language features and the tool support, which indeed is really good. We started with the basic overview of the language features:
  • type inference
  • nullable types
  • extension functions
  • lambdas
  • compact syntax
    • no semicolons
    • it keyword in lambdas
    • null safety operator
    • elvis operator
To name only a few. What was also interesting to me is that by default Kotlin code is compiled to JRE 6 byte code, but you can still use fancy new stuff like lambdas. Alternatively you can compile to JRE 9 code with module support or even to native code.. wooot....

Most of the talk revolved around interoperability between Kotlin and Java. To make it short: it works pretty well. Calling Java from Kotlin seems to be no problem at all. The other way round has a few issues but usually is also not much of a problem.

Falk Sippach - Master Legacy Code in 5 Easy Steps

A live coding demo of the techniques you can use when dealing with legacy applications. There are already some books about this, but most of them assume that a) the legacy code is testable and b) you already have tests.

That is not always the case. So what do you do when you have code that can't be tested and that also does not have tests (which is not that unlikely, since it is not testable.. d'oh)?

Before you actually start getting to work you need to make yourself aware of two necessities:
  1. only perform very small isolated steps, so that you cannot break too much and can stop at any time
  2. do not start fixing bugs along the way; you do not know if those bugs are accepted behavior for clients, or maybe they aren't bugs after all when you see the big picture - preserve the current behavior.
The following techniques can help you to get control over your legacy code:
  • Golden Master: Even if you do not have tests, you can usually record the system behavior for a certain event in some way. That can be log output or the behavior of your UI when you enter certain data. You can record this behavior, make small changes and then perform the same action again to see if the behavior is still the same. This is no guarantee that you did not break anything, but it is an indication that you did not break everything. So if your steps are small enough, this can be a good way to refactor your code towards testability.
  • Subclass to test: Some classes cannot be tested because they have certain dependencies that make tests hard or unpredictable. You can try to subclass those classes and then override the methods relying on those dependencies to return a static value, allowing you to test the rest of the class.
  • Extract pure functions: Split your code up so that you can isolate pure functions where possible, making your code more readable.
  • Remove duplication: That is usually good to remove lines of code.
  • Extract class: Similar to extract pure functions but on a larger scale. That way you can create cohesion by extracting small logic parts into separate classes.
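To illustrate the Golden Master idea, here is a hypothetical before/after pair; the "legacy" method stands in for real untested code, and the recorded output is compared after each small refactoring step:

```java
import java.util.List;
import java.util.stream.Collectors;

/** Golden Master sketch: record the legacy output once, then compare
 *  it after every small refactoring step. */
public class GoldenMaster {

    /** Stand-in for untested legacy code whose behavior we must preserve. */
    static String legacyReport(List<Integer> orders) {
        StringBuilder sb = new StringBuilder();
        int total = 0;
        for (int o : orders) {
            total += o;
            sb.append("order=").append(o).append(";");
        }
        sb.append("total=").append(total);
        return sb.toString();
    }

    /** The refactored version must produce byte-identical output. */
    static String refactoredReport(List<Integer> orders) {
        String items = orders.stream()
                .map(o -> "order=" + o + ";")
                .collect(Collectors.joining());
        int total = orders.stream().mapToInt(Integer::intValue).sum();
        return items + "total=" + total;
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(10, 20, 12);

        // Step 1: record the golden master before touching anything.
        String goldenMaster = legacyReport(input);

        // Step 2: refactor in tiny steps, re-running the comparison each time.
        String afterRefactoring = refactoredReport(input);
        System.out.println(goldenMaster.equals(afterRefactoring)); // true
    }
}
```

In a real project the recorded output would be a log file or UI snapshot rather than a return value, but the comparison loop is the same.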

N. Orschel & F. Bader: Mobile DevOps – From Idea to App-Store

This was also not quite what I was hoping for. The part that would have been interesting to me was rather short in this talk: how to handle the different build requirements for all platforms, what the challenges are in getting the app rolled out to the different stores etc.

The main part of this talk revolved around tool selections like "do we use centralized or decentralized version control". Yes, that was an important decision for the guys in the project, but in the end it does not matter much for the way the app is built and deployed, not even much for development. Code reviews can be done with any VCS, it is sometimes just more difficult.

Anyway, there was still some relevant info for me regarding how to test the different applications.

To build and test the iOS app you still need a Mac, but you don't want to buy Macs for every developer. The solution shown here looks promising: simply buy a Mac Mini with the maximum hardware profile and install Parallels on it. That way you can have lots of virtual Macs for your dev and test environments.

With Android you have the issue that there are thousands of different devices out there and you can't buy one of each just to test your app. Solutions for this can be found in Xamarin Test Cloud or Perfecto Mobile.

And finally Appium provides an abstraction for test automation.

Franziska Dessart: Transclusion – Kit for Properly Structured Web Applications

I literally had NO clue what transclusion means. And the subtitle was also not that enlightening to me. So what is it? A new framework? A weird forgotten technique? Well.. not really ;-)

Transclusion means that a resource can include other resources via links and those resources will then be embedded into the main resource.

Sounds familiar? Here a few examples of transclusions:

  • iframe
  • loading web page parts via ajax
  • SSI (Server Side Includes)
  • ESI (Edge Side Includes)
  • External XML Entities (remember the XXE exploit from yesterday? ;-))
The term is not new, it is just not that widely known. But with more and more micro service architectures it becomes more important.

Most of this talk targeted the different places a transclusion can occur and what considerations have to be made. Mainly you want to assemble the primary content of a website before the user sees it, so if you have to use transclusion for it, you do not want to do it on the client. Secondary content on the other hand is fine to be transcluded on the client side.

With every transclusion choice you will have to take some aspects into account:
  • should the transcluded resource contain styling or scripting?
  • do you need/want to resolve transclusions recursively and if so for how many levels?
  • do you cache? if so, the final resource, only certain transclusions, or only the main resource?
All of those questions have to be answered separately for each use case. 
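To make the mechanics concrete, here is a toy server-side transclusion resolver with a recursion depth limit. The marker syntax is made up for this sketch; SSI and ESI each use their own directives:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Toy server-side transclusion: placeholders like
 *  <!--#include id="teaser"--> are replaced by other resources,
 *  resolving nested includes up to a configurable depth. */
public class Transclusion {

    static final Pattern INCLUDE = Pattern.compile("<!--#include id=\"(\\w+)\"-->");

    static String transclude(String resource, Map<String, String> resources, int maxDepth) {
        if (maxDepth <= 0) return resource; // stop recursing, leave markers as-is
        Matcher m = INCLUDE.matcher(resource);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String included = resources.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(
                    transclude(included, resources, maxDepth - 1)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> resources = Map.of(
                "page", "<main><!--#include id=\"teaser\"--></main>",
                "teaser", "<p>News: <!--#include id=\"headline\"--></p>",
                "headline", "It rains");
        System.out.println(transclude(resources.get("page"), resources, 2));
        // → <main><p>News: It rains</p></main>
    }
}
```

The depth parameter is exactly the "how many levels do you resolve recursively" question from the list above.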

Thorsten Maier: Resilient Software Design Pattern

Resilient software has always been important. Micro services hold new challenges in this area though. 

If we want high availability for our software, we can try to increase the time until a failure occurs and we can also try to reduce the time to recovery in case of a failure. Ideal would be a system where a failure is not noticeable from the outside because the system can recover instantly, or at least seems to do so.

Thorsten not only described the well known patterns for these goals but gave specific implementation examples for most of them, e.g. using Eureka (from Netflix) as a service registry, doing client-side load balancing with Ribbon, or using Zuul as an API gateway.

This was a nice change, because such talks usually only cover the theoretical concepts and do not show you which tools you could use. To be fair, most implementations boiled down to an annotation within a Spring Boot application, so they were rather easy to show ;-)

Day 3

Mario-Leander Reimer: Versatile In Memory Computing using Apache Ignite and Kubernetes

Starting the day with some nerding, oh yeah!

Mario-Leander gave a great overview of the impressive capabilities Apache Ignite has. It seems to try to handle everything any middleware has ever done. While I am not a big fan of the "ultimate tool" pattern it does have some really interesting features.

Mario-Leander started off with his reasons for using Ignite. Micro services should not have state, but applications usually do have state. So this state has to be put somewhere, usually into some shared database. When your landscape grows this can become a bottleneck, or in some cases you need to access your data in different ways. This is where Ignite can help.

Ignite provides you with a distributed, ACID compliant key value store that is also accessible via SQL or even Lucene queries. That alone is quite a lot, but the list of features goes on for quite a while, including messaging, streaming and the service grid. The messaging component provides queues and topics just as JMS would.

The service grid allows your software to deploy IgniteServices into your cluster; an IgniteService is a piece of code that gets distributed to your nodes. Such a service usually needs to access portions of the data, but depending on the cache mode not all data is available on all nodes. Ignite supports collocated processing, which means that processing takes place on the nodes where the data resides, reducing data transfer to a minimum.
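The idea behind collocated processing can be sketched in a few lines. This is a toy simulation, not the Ignite API: `owner`, `put` and `affinity_run` are made-up stand-ins for partitioning data by key and shipping a closure to the node that owns that key.

```python
# Toy simulation: data is partitioned over nodes by key, and the
# computation is "shipped" to the owning node instead of pulling data.
NODES = 3
cluster = [dict() for _ in range(NODES)]  # one local store per node

def owner(key):
    return hash(key) % NODES  # partition by key, as a partitioned cache would

def put(key, value):
    cluster[owner(key)][key] = value

def affinity_run(key, fn):
    # only `fn` travels to the node holding the key, not the data
    return fn(cluster[owner(key)][key])

put("order:1", [10, 20, 30])
total = affinity_run("order:1", sum)  # total == 60, computed "locally"
```

The point is that the list `[10, 20, 30]` never leaves its node; only the small function and its small result cross the (simulated) network.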

The streaming component was shown with a live demo where Mario-Leander attached his Ignite to Twitter querying for the conference hash tag and printing the tweet data to standard out. For most features the slides provide code examples that show a concise API.

The talk itself was very good and Mario-Leander was able to answer most questions in depth showing his experience with the topic. Was definitely worth it.

Andre Krämer: Exhausting CPU and I/O with Pipelines and Async

Uhh.. more nerding!!! Or not ;-) From the topic I was expecting some advanced techniques for asynchronous programming, or maybe a project report about problems that have been tackled with async paradigms. Actually it was more an introduction to async and why async is faster than sync.

The basics were that you should separate I/O and CPU intensive tasks in a producer/consumer pipeline so that the separate workers can be scaled independently to get the most out of your system.
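Such a pipeline can be sketched with a queue between the stages, so the number of I/O workers and CPU workers can be tuned independently. This is a simplified Python sketch of the general idea, not code from the talk; `read` and `process` are placeholder callables for the slow I/O step and the expensive CPU step.

```python
import queue
import threading

def run_pipeline(items, read, process, io_workers=2, cpu_workers=4):
    """io_workers run the (slow, blocking) `read`; cpu_workers run the
    (expensive) `process`; a queue decouples the two stages."""
    todo = queue.Queue()
    for item in items:
        todo.put(item)

    raw = queue.Queue()      # hand-off between the I/O and CPU stages
    results, lock = [], threading.Lock()

    def io_worker():
        while True:
            try:
                item = todo.get_nowait()
            except queue.Empty:
                return
            raw.put(read(item))

    def cpu_worker():
        while True:
            data = raw.get()
            if data is None:  # poison pill: the stage is done
                return
            with lock:
                results.append(process(data))

    io_threads = [threading.Thread(target=io_worker) for _ in range(io_workers)]
    cpu_threads = [threading.Thread(target=cpu_worker) for _ in range(cpu_workers)]
    for t in io_threads + cpu_threads:
        t.start()
    for t in io_threads:
        t.join()
    for _ in cpu_threads:
        raw.put(None)  # one pill per CPU worker
    for t in cpu_threads:
        t.join()
    return results
```

Because the stages only share a queue, you can give the I/O stage two workers and the CPU stage four, or whatever ratio your measurements suggest.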

Then there was a part about measuring I/O performance on a machine and what pitfalls can lurk there. It had some interesting aspects but focused a bit much on NTFS specialties, which simply are not relevant to me. The conclusion was that you should be sure to measure the right thing if you want reliable results; e.g. small files are often cached in RAM and never touch the disk.

Christoph Iserlohn: Monoliths – Better than their Reputation

This topic sounds pretty controversial with the current micro services hype. 

A slow or cumbersome development cycle quite often is not caused by the software but by the organizational structure. When different departments with different agendas are involved (e.g. Devs have different goals than Ops or DBAs: features vs. stability), you will run into problems that you will not be able to solve with micro services.

On the other hand, a monolith need not be bad per se. You can still structure your monolith into separate modules to mitigate most of the well known disadvantages of monoliths. There are even some aspects in which the monolith is superior to micro services:

  • Debugging: It is very hard to debug an application consisting of tens or hundreds of different micro services, and it requires a lot of work beforehand to be possible at all
  • Refactoring across module boundaries: In a monolith you (usually) have all affected components in one place and can spot errors quite early. In a micro service landscape it can be quite hard to make sure all clients are adjusted to or can cope with your changes
  • Security: A single micro service is more secure than a large monolith, but if you consider the whole micro service application this is not necessarily the case, as you have to take the communication paths into account, which are often quite hard to secure
  • UI: With micro services a good UI is usually hard to build if the data comes from different services
  • Homogeneous technology: This can be an advantage as it reduces the complexity of the system and requires less skill diversity, but also a disadvantage as you can't choose a different technology for a module
The bottom line is that monoliths and micro services are neither simply good nor bad; it all comes down to your requirements and external parameters. So think before you build!

Nicholas Dille - Continuous Delivery for Infrastructure Services in Container Environments

This talk focused strongly on the Ops perspective of micro service platforms and, like many others, started off with the basic reasons and advantages of automation for software deployments, like reproducibility, ease of deployment and stability.

To achieve that, the automation must be easy to use, well tested and standardized. That is why everything-as-code is so important: it allows us to create exactly that. According to Nicholas this is where Ops still can and needs to learn from Devs' experience in software engineering.

Another method to improve the build process is micro labeling for containers. This means to label each build with a set of properties to make it identifiable and allows us to see what source code version it reflects. Those properties can be:
  • artifact name
  • repository name
  • commit id
  • timestamp
  • branch name
To further help with automation you should use containers to deploy your applications, as that gives you a single way of deploying. Containers also make monitoring easier, as they can collect the monitoring data from the application and expose it to the monitoring infrastructure. That way you can focus on the actual evaluation instead of on collecting data.

One issue that arises with containers is that a container should be stateless, but most applications do require state to function properly. Instead of using host volumes mounted into Docker, Nicholas presented a different approach. Ceph is a distributed storage system that can also run in a container, so applications can store their data in Ceph; but of course if the Ceph container dies, the data is lost. So they set up a cluster of Ceph containers: if one dies, it gets restarted by your orchestration software (Rancher in this case) and the new container syncs to the existing cluster within minutes.

I think this is an interesting solution, but I am not sure I would feel comfortable knowing that all my data resides in memory. Maybe they only use it for non-critical data that does not have to be persisted at all costs, but I can't remember anymore.

Even though I was not exactly the target audience I took quite a bit from this talk and think Nicholas showed a lot of experience in this area. Especially while answering all questions in depth, two thumbs up!

Christian Robert - Doctors, Aircraft Mechanics and Pilots: What can we learn from them?

Well the title really says it all. Other professions have harnessed techniques to function under pressure situations, so why not try to learn from them.
  • Pilots
    • All information in the cockpit -> Have all monitoring data aggregated in one place so that you can spot problems at once. 
    • Standardized language -> Have a common vocabulary and a common understanding of it, e.g. what does "Done" mean?
    • Clear responsibilities -> Communicate in a clear and expressive way so that you are sure your colleagues understand. Also vice versa tell your colleague that you did understand.
    • Checklists -> Always use checklists for important or complicated tasks, no matter how often you have done them in the past. Routine is one of the greatest dangers for your uptime.
    • Plan B -> Always make sure you have a plan for when the shit hits the fan
  • Aircraft Mechanics
    • Prepare your workspace -> Well, this analogy did not work that great. It refers to the fact that an aircraft mechanic would not repair a turbine that is still attached to the plane. He would dismount it and then bring it to the workshop. In software terms you could translate it to: build your software modular, so you only have to patch an isolated component.
    • Tool box -> Use the tools you know best and that suit the task at hand
    • Visualize processes -> Use a paper Scrum/Kanban board so people can move the post its themselves. This is one point where I disagree.
    • Document your decisions -> Documentation for system architecture and other project decisions so everyone can understand them.
    • Bidirectional communication -> Make sure everyone can follow the progress by using issue trackers like JIRA.
  • Doctors
    • Profession vs Passion -> Doctors want to help people but sometimes they have to deal with bills and pharma consultants. In every job there are tasks that you have to do even though you don't like them, e.g. meetings.
    • Understand your patients -> When talking to non techies we have to explain things so that they can understand. No tech mumbo jumbo.
    • Prepare as well as you can -> A doctor does a lot of preparation for an operation by talking to the patient, running tests and making sure he has all the information he needs. Similarly, we should prepare before implementing features by gathering all requirements and information from the stakeholders.
And one thing they all have in common: "Don't panic. Stay calm." It does not help to have 10 people fixing the same bug or doing things just for the sake of it. Focus on what the issue is, how bad it really is and what is needed to tackle it.

Uwe Friedrichsen - Real-world consistency explained

Disclaimer: To understand this part you should know about ACID, CAP and BASE.

I don't know how to say this differently: This talk scared the bejesus out of me. Why, you ask? You will see.

Relational databases with ACID, that is what most of us started with. A simple MySQL or maybe Oracle or something similar, and we were happy. All of those had ACID, so we could simply code on and be sure that all of our database operations would complete and we would not lose any data or even end up with inconsistencies.

But unfortunately this was never the case. It all has to do with the 'I' in ACID: Isolation. Uwe did a great job explaining the technical details, but here is the short of it (especially since I can't repeat it ;-)): ACID in its textbook definition can only be achieved when the database uses the highest isolation level, called "serializable", but hardly any database uses this due to serious performance impacts. Usually a database uses "read committed" or "repeatable read"; those do provide some isolation, but not perfectly. That is why you are very likely to see some kind of anomalies in every database. YUCK!!
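To see why weaker isolation levels matter in practice, here is a tiny simulation of the classic lost-update anomaly: two transactions read the same value before either one writes, so one increment silently disappears. This is plain Python just to illustrate the interleaving, not any particular database's behavior.

```python
# Two "transactions" both increment the same balance. Under an isolation
# level that lets them read before either one writes, the second write is
# based on a stale read and overwrites the first: a lost update.
account = {"balance": 100}

def write_back(stale_read, amount):
    # the write uses the value read earlier, not the current one
    account["balance"] = stale_read + amount

read_t1 = account["balance"]   # transaction 1 reads 100
read_t2 = account["balance"]   # transaction 2 also reads 100
write_back(read_t1, 20)        # balance is now 120
write_back(read_t2, 50)        # balance is now 150: the +20 is gone

final = account["balance"]     # 150 instead of the expected 170
```

A serializable schedule would force one transaction to run after the other (or abort one), giving 170; weaker levels can let this interleaving through.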

So what about NoSQL databases? We have to deal with the CAP theorem, but that is what BASE is for, so they all provide eventual consistency, right? Turns out that is also not the case, but most people just don't know it. You have to configure your database properly to ensure eventual consistency. One quite common problem is that a database is often chosen because it is hyped or sounds interesting, not because it suits the problem. Especially with NoSQL this is very important, since almost every NoSQL database was designed with a special purpose in mind. Look at Cassandra, for example: this database was designed to be as available as possible. As long as one node is alive you can write data into it. But consistency? You wish...

Check out the slides. The research Uwe has done is impressive and I can't repeat half of it here.

Another problem with NoSQL is that, since there is no ACID but BASE instead, our programs have to take that into account. So the programming models are not as simple anymore as they were with SQL databases.

So are we stuck with either ACID, which does not work as well as we thought, or BASE, which makes programming much more difficult? There are a lot of things between ACID and BASE, like Causal Consistency or Monotonic Atomic View. There is still a lot of research going on in this area, but in the end our applications will have to do their part to make sure our data is safe.

I really recommend checking out the slides, as this is a very interesting and also very important topic.

Sven Schürmann & Jens Schumacher - Container Orchestration with Rancher

A hands-on report by two guys who spent the last year working with Rancher. To keep it short: it was a good overview of some key features of Rancher, which I will try to summarize below.

But first: why do you need Rancher? Can't you do orchestration with Kubernetes? Yes you can, but there is more to a container platform than plain orchestration. You still need things like access control, handling deployments on different stages, defining how exactly a cluster should look, and so on and so forth. And this is where tools like Rancher come into play. For the plain orchestration you can use Kubernetes within Rancher, or other tools like Docker Swarm or Rancher's own Cattle. As of 2.0 Rancher will use Kubernetes by default.

Rancher Features:
  • Machine Driver: With this you can create new hosts dynamically. It supports various cloud providers but can also include custom scripts.
  • Rancher Catalog: Used to define templates for each service. A template consists of a docker-compose.yaml and a rancher-compose.yaml, where rancher-compose follows the docker-compose mechanics. You can integrate it with a VCS so you can always roll back to older templates. You can use variables in the docker-compose file (like Spring profiles for an environment) and define the allowed values in rancher-compose, which can then be selected when applying the template.
  • Infrastructure Services: 
    • Healthcheck: Knows all containers running on a host and monitors them closely
    • Scheduler: Works together with Healthcheck. When a node is down the scheduler notices that there are fewer containers running than configured and starts up a new one.
    • Network: You can define different network types for your containers depending on where you are running, e.g. whether you need IPSec or not. Includes simple load balancing and service discovery.
    • Storage: You can configure the storage type and Rancher will apply the appropriate plugins. So far you can choose between NFS, NetApp SAN and Amazon EBS
This was only a small insight into Rancher and its features, but it got me interested, and it shows that for larger landscapes you need more than "just" Kubernetes.

Dr. Jonas Trüstedt - Docker: Lego for DevOps

The last talk of the conference. Once more containers ;-) Quite a lot of this talk had already been covered by others, so I will not repeat that. What I took from this talk was a short comparison between Rancher and OpenShift:
  • Rancher is easier for a first test setup, whereas OpenShift requires quite a bit more configuration to get started
  • OpenShift has a higher integration with development tools, e.g. you can just tell it to checkout a git repo with ruby code in it, then OpenShift will create a new image based on a ruby image and push that new image to your registry
  • OpenShift uses Ansible, so you have to install Ansible and at least get acquainted with it
  • According to Jonas OpenShift is more powerful


Thank you for reading this far, or at least for scrolling down. I hope you found at least some useful information in this post. Please let me know if you spotted an error or any misunderstandings on my part. After three days and a lot of different topics it is quite possible that I messed something up. And as always, any comments are welcome :-)

Montag, 13. November 2017

Cambridge Double Trouble 2017

After a few years of only going to continental and mainly German tournaments I decided it was time to head over to the island again. By browsing TFF for the next upcoming British tournaments I finally found the Cambridge Double Trouble. This tournament had a few selling points. It has a rather unique rule set (only double skills allowed with 4 tier system and a rather low amount of skills, at least for the tier 1-3 races), it is a team tournament for 2 player teams but you can just sign up and get assigned another random single sign up (so you are bound to meet new people!!) aaaaand it is run by Schmee and Purplegoo.

Especially the latter point was important to me, as both are lovely chaps and the combination promised the best of both worlds. Schmee coming up with the wacky ideas for rules and evening entertainment and Purplegoo running the actual tournament with his routine and calmness. And what can I say, both delivered!

The travel connections were great and my hotel was only a 10 minute walk from the venue and 20 minutes from the train station. So all was set for a smooth trip. On Friday evening we had a get-together in a nice pub, The Castle Inn, closer to the city center, so I had a nice evening walk through the residential areas of Cambridge to get there. We were only a small group of 6 people in total and only 3 were actually involved in the tournament (Schmee, Pipey and myself), the others being Schmee's lovely significant other Kate and two friends, Mike and Matthew. So apart from the traditional Blood Bowl nerd talk we also had "regular" conversations and quite a few entertaining games including "cricket trumps" and "take it away", rounded off by some beers and pub food. A perfect start to the weekend.

Saturday morning started slow due to the landlord not being up in time to let us in, but as this was the same as every year (according to everyone involved) no one was either surprised or stressed, and with a bit of delay we were able to get started. The first step was to meet my random team mate Wolimorb, a rather new player with a few tournaments under his belt. To our surprise we were both using Humans with a similar roster but different skills. While we both had an Ogre with Block, I opted for the lame 4 Guard Linemen variant and he took more fancy skills on his Blitzers like Dodge, Sidestep and even Leap. Especially the Leap Blitzer was used several times to great success and caused some serious headaches for his opponents! Go Woli!!!

In round 1 we were paired against Howlinggriffon (Skaven, vs me) and Mr Frodo (Underworld, vs Wolimorb). The Skaven had a rather crappy start, only producing one knock down from roughly 12 block dice against my Linemen. In the end everything was rather tied up, but I was able to chain push the screening players away from the ball carrier and take him down. From there it was a few turns of back and forth, but finally I was able to secure the ball and score in Turn 8. In the second half attrition really hit the Skaven and there was not much to be done, so I was able to get a safe 2-0 win. Wolimorb drew his game 1-1, so we went straight to the top dogs :-)

Round 2 saw two Underworld vs Human matches, where I faced besters while Wolimorb took on VultureSquadron. besters had to kick and the ball landed exactly on the LoS one square from the sides, which led to the inevitable Blitz! kick off result. The Underworld moved Skitter Stab-Stab under the ball and managed to build a screen around him before he caught the ball. So much for my offense then. Even though I got the ball loose in the next turn, the sewer scum managed to recover and retreat. The following turns were dominated by a tight defense from my end, forcing lots of rolls on the Underworld and trying to maximize damage. Unfortunately it did not pay off at first. Only in the very last turn was I able to regain the ball and score a touchdown after having to do a 3+ dodge, 2 gfi, 3+ pickup, 4+ pass, 4+ catch and 3+ dodge. All after the reroll was gone for taking down Skitter. In the second half the Underworld offense tried to advance, but I kept up the pressure as before, so it was a tight affair. In the end I was not able to do enough damage/enforce enough turnovers, so the game ended in a 1-1 draw. My team mate got a whopping 0-0, so we were still unbeaten.

Round 3 paired us off against the later winners; Wolimorb had to play Pipey with his Undead, who would later finish the tournament as best individual with an impressive 4/2/0. I got to play peo2223 with an interesting Norse roster, consisting of Wilhelm Chaney, a Block Snow Troll and 11 Linemen containing 2 Guarders and a Leader. I got the offence and again the kick off was a Blitz!. This time the kick was deep, so the Norse broke through and tried to put as much pressure as possible on the ball. I barely managed to recover, but the Norse refused to die and started to do some damage themselves. Especially the Snow Troll was very reliable and caused some major headaches on my side. With some luck I could advance into the opposing half and was finally able to squeeze through the Norse screen and scored in Turn 7. The Norse threw everything they had into the 2 turn drive and Wilhelm went off with the ball. I got him down and tried to secure the ball as well as possible. But it was not enough, and Wilhelm showed he is worth his paycheck by making a dodge, a pickup and 2 gfi to score the 1-1. In the second half the kick went way over to the sidelines and Nuffle struck the Norse: failed pickup, ball gets thrown in and lands just behind my defense line. I got the ball up and tried to secure it as well as possible. peo on the other hand was determined to get it back, and a few turns later my numbers started to crumble. It was massive back and forth around the LoS until I saw a small chance for an opening and advanced. Then in turn 7, when peo decided to go for the draw instead of trying to win (probably the right decision at this point), I was able to make a bold move and sent the ball carrier through the lines. Unfortunately the much needed screening failed due to a dodge, and so the ball carrier stood there rather scared, awaiting his doom. peo had to do some rolls but managed to get 2 dice on the ball, which luckily for me were not enough.
So in the last turn I cranked up the boldness some more and scored with a 4+ and a 3+ dodge. Phew... On the other table Pipey was not so cooperative and gave my team mate a rather good whopping, so we ended with a second draw. Still awesome!!

The Saturday evening saw a collective dinner at a nice curry place. Schmee had organized the order beforehand, and despite some confusion with the order and the bill afterwards everything turned out OK and we enjoyed our meal and some interesting conversations with tournament veterans and newbies. Afterwards we went to a nearby pub with live music, played some pool (or rather the English played and I tried not to scratch the table too much) before we got schooled by some local semi pros ;-) This was followed by more talking and drinking until the pub closed down and we headed off to our beds.

Sunday morning gave us a new matchup: I got to play Darkson with more Norse, this time with Dodge Blitzers and a Mighty Blow Runner but no stars. Wolimorb had to take on Moodygits Undead. This time I had to play defence from the start, but with pouring rain the Norse failed the first pickup. So I tried to swarm the opposing half to put on some pressure, after having a Guarder sent to the KO box. In the second turn the Norse got the ball up, caused some stuns and a second KO that I apoed. Due to constant high AV rolls the Norse managed to advance. After a few turns I was able to put pressure on the ball, but the Dodge Blitzer survived a total of 8 block dice and managed to dodge out a few times, only to finally fail the very last dodge into the end zone. This caused some understandable distress on the other side of the table ;-) In the second half I tried to punch through the Norse but only had moderate success. At one point I had to take a gamble to move around a Norse bulk. The original plan was to use the ball carrier for a chain push into the fans, but since the first action resulted in a double skull I had to readjust a bit, and so a lino had to perform that blitz, which meant that the ball carrier was not completely safe. So Darkson got a 3+ dodge and 2 gfi for a 1 die block on the ball carrier, which worked. The real problem was that the ball bounced to the fans and was of course thrown way back into my half. From then on it was 3-4 turns of knocking the ball loose, failing to pick up, putting tackle zones on the ball etc. from both sides. At one point I got the ball back with my catcher, but there were two Norse linos that could reach him with one dodge each. The others were behind a screen. I figured it would be better if only one of them was standing, so there would be no second player to pick up the ball afterwards. That's why I tried to block with my Ogre, which boneheaded.
That way the lino that should have been blocked was able to open the screen, and the Mighty Blow Runner could blitz the ball carrier. At first I thought it was a mistake to try the block, but now I am not so sure, as a knock down would have been really helpful. But in the end it backfired and so we kept on struggling for the ball. I got it back again, but had too many players out for protection, so he got knocked down again. In the last turn I then failed the pickup due to the rain. So it ended in a 0-0 draw. Luckily my team mate managed to hold a draw as well, so we did not lose this round either.. wooot wooot!!

After a tasty breakfast/lunch break, round 5 paired Wolimorb against Lycos with his Halflings; he was very successful with them last year, but this weekend the little ones had a hard time, so good omens for Woli! For me, J_Bone brought some Chaos Pact with fully packed Big Guys: Claw on the Mino and Block on the other two, plus a chainsaw. After losing the coin toss I decided to sacrifice some linos, but Nuffle threw a Blitz! over the fence together with a kick in the middle of the opposing half. So I tried to capitalize on this, blitzed through the screen and tried to push a Blitzer forward, but failed. So there was not much benefit. But J_Bone's Big Guys and the chainsaw did have some trouble getting started. A lot of failed nega traits and kickbacks were a dominant factor in this half. In the end I got the ball and managed to score at the end of the half. The second half started with.. another Blitz! This time for the Chaos Pact. J_Bone got quite a bit more out of his Blitz! than I did, and so it was pretty tough to get the ball safe. After a bit of rolling and praying I managed to advance and finally scored in the middle of the half. So we got our third kick off and of course, the third Blitz! This put the final nail into J_Bone's coffin, and even though his Big Guys decided that the second half was the time to maim some humans, he was not able to score. Had his player removal worked that well in the first half, I would probably not have had any chance, but the dice were on my side and so I got a lucky 2-0 win. On the other hand Lycos' luck seemed to have come around and he creamed Wolimorb with the little fellas. But still, undefeated!!

Final round!! It feels weird when you are 3/2/0 after five games and THEN get paired with Halflings. But deeferdan2383 was on 3/1/1 with his, so he posed a serious threat. Wolimorb faced MrZay's Orcs. The coin toss sent me into defence and I lost 2 rerolls. I decided to put the Ogre on the LoS despite the Dirty Player Halfling. The idea was that the Ogre would be the fouling bait, as his Thick Skull gives good odds of staying on the pitch compared to the fouler being sent off. Well, the first block killed the unskilled lino and the foul also caused a cas on the Ogre. The Dirty Player was sent off and the apo saved my Ogre. After that I killed a Halfling, so we were at 9 vs 9, and I started to swarm the Halflings to force dice rolls. Unfortunately the Halflings refused to fail their dodges or die from my blocks, while none of the trees (including Deeproot) failed their rolls. So I really struggled to keep them at bay, and it did not help that dan played it very solid ;-) In turn 7 came the inevitable Throw Team Mate attempt, which worked perfectly, and so the little ones went up 1-0. So the Humans set up for the two turn touchdown, which is quite hard against a LoS full of trees. But the fans decided to create some havoc and so we lost a turn due to a riot. So in the second half the pressure was on. I wanted to take down the two regular trees and surround them, so the stand up rolls would be hard enough for at least one to stay down. The problem was that before I could do all of that I had to get the ball safe, but my thrower screwed up. So I was in a rather bad position and the trees AND Halflings took out more and more players. I tried to break through, but a successful Throw Team Mate gave dan a chance for a 2 dice Wrestle blitz, which worked. The ball was thrown in, but only 2 squares, and so landed next to the Halflings. In the end I was not able to recover. The trees and Halflings were on fire, not giving me the least of chances in this game. Well done, Dan!!
Since the Orcs were also not in the mood to give away points, this sealed our first and only team defeat :-(

Then it came to the awards. As expected from the match results, Pipey and peo won the team competition and Pipey was best individual player. deeferdan came second with his Halflings and of course got the "Best Stunty" award. At this point something unexpected happened. Since Pipey and Dan had both already got an award, the trophy for "Best Individual Player" went to the 3rd place, which was me... This took me completely by surprise, as it seems a bit odd to give that award to someone who had not actually won it. But everyone seemed to be okay with that and so I gratefully accepted the trophy. I was even more surprised when I realized that the award also came with a "real" prize. In my case I got one of the new boxed Dwarf teams from Games Workshop.

As expected the crowd soon departed, as everyone was eager to get home in time. So I decided to stay in the pub for dinner, trying the traditional Sunday roast (when in Rome..) and enjoying the atmosphere.

To sum it all up, this tournament was an awesome experience on and off the pitch. So if you fancy a unique ruleset with great competition, good sportsmen and a well rounded supporting programme, you should definitely go to the Double Trouble!! I can only recommend it to everyone. Thanks again Schmee and Purplegoo for organizing this, and everyone else for the great inclusion and entertaining conversations.

Freitag, 23. September 2016

Humans Painting

A lot of time has passed since I painted my last team. But as I was stuck on the couch for the last weeks due to a juicy knee injury, I finally got it done and painted my Human team.

The minis are an Elfball team from Impact, so the positions were not exactly like those on the Blood Bowl Humans and I had to juggle them a bit.

So two of the five safety players, which kind of resemble the blitzers, ended up being linemen, as I liked the postures of the linos better for blitzers.

As usual the painting is not that great, but considering my lack of practice and the fact that most colors have been very old and pretty claggy, I am still rather happy about the outcome.

Sonntag, 7. August 2016

Devoxx UK 2016

Ambience and Background

Three years ago London saw the first installment of Devoxx UK, which I had the pleasure to visit. This year I decided it was time to come back and see how the conference has evolved and if its value for money is still as good as it was.

To make it short: yes, it is! For a very reasonable ticket price you get two days full of talks spread over five tracks, decent catering and a free water supply. The Business Design Centre is a nice location and of course London is always worth a visit.

In order to keep the ticket prices low the organizers have to bring in a lot of sponsors, so the lobby was packed with stands. IMO this is not necessarily bad, at least you can get a lot of goodies ;-)

Another cool thing is that all talks are recorded and the videos are uploaded to YouTube. So if you want to watch the talks, just go to the Devoxx UK channel.

Day 1

Dot Con - James Veitch

The opening talk was a great start to the conference. A very funny narrative of what happens when you actually reply to scammer emails (like the Nigeria Connection etc.), but in a special way.

James gave a summary of his experiences when messing with the people behind those emails and thereby almost driving them mad. Usually I am not a fan of harassing people, but if it happens to fraudsters who try to steal the savings of innocent people who do not really understand that there is a whole con industry out there, I strongly approve!

I highly recommend you watch the video to this talk, as no summary would do justice to it. It was hilarious.

Unfortunately it seems this video was not uploaded to the channel.. real pity there...

Embracing Failure - Mazz Mosely

I chose this talk because I remembered Mazz from a talk at the first Devoxx UK which was pretty good, so I gave her another shot. You noticed immediately that she was very nervous speaking in front of a rather large audience, but apart from that the talk was good. The topic itself was more about what bad management looks like, especially when a project is in a bad phase. She drew the picture of the typical jerk-boss stereotype (presumably from her first job) who responds to delays and errors by putting on more pressure while taking all the credit for success.

The story seemed to take a turn for the better when she told us about a meeting where she, as a young developer, dared to speak up in a review meeting, suggesting some improvements. Against all expectations the more experienced team mates backed her up and it seemed that from there on all would turn out well. But no, afterwards she got snubbed by the manager and it was clear that there would be no happy future for her in this job. To make everything even sadder, the manager got promoted for the work the team did and no credit was left for those who actually saved the project.

In the end her moral was that you should speak up and try to make your workplace better, even if it is hard. But if it turns out that there is no way to make progress and that you will not be happy where you are, then move on and find people who share the same values as you do.

Where's my free Lunch? - Hadi Hariri

For me, this one can be summed up pretty briefly. There is a huge number of online services that are, at first glance, free. You can use them and no one is charging you. Hadi showed in detail how you actually are paying for things like Google search or Facebook, to name just a few big ones. The bottom line is: you should not just blindly use everything out there, but make sure you protect your data and understand what the service you are using is taking from you in return.

Dials to 11 - Modern Extreme Programming - Benji Weber

Cool talk, revisiting the principles of Extreme Programming and Agile development, plus a few ideas and principles building on that. The concept I found most intriguing was Mob Programming; using the brain power of the complete team to code complex core modules sounds like a really good idea. Other concepts revolved around Continuous Deployment and Test Driven Development, which did not strike me as too revolutionary. The final point, which I had to agree with, is that projects have to be refactored on a regular basis, especially when maintenance, testing, monitoring etc. get too complicated.

Cybercrime and the Developer. How to start defending against the darker side of the web - Steve Poole

Now that topic did sound interesting; I was looking forward to hearing about best practices against cyber attacks, backgrounds of the business etc. But in the end this was just the same old story of "check your app for security issues" and "the bad guys are out there", combined with some numbers on cyber crime and links to checklists for security issues. Breaking news.. And the biggest wtf moment was a short advertisement break for a sponsor. Well, I am not going to get that time back...

Arduino and Java with the Intel Galileo - Simon Ritter

For this one, I was probably not the intended audience. I think I would have gotten more out of it if I had at least some practical experience with Arduino. For a while it felt like an advertisement for the Intel Galileo, but it soon turned into a report of the issues the Azul guys encountered when trying to get Java working on it.

Overall the style of the talk was good and it was interesting to hear what problems can occur on embedded platforms.

Extreme Profiling: Digging into Hotspots - Nitsan Wakart

Okay, this one went right into the nerd stuff. Nitsan gave a great overview of different profiling tools like VisualVM or Java Mission Control, compared them to what flame graphs can do, and showed how to use perf to help find performance issues within a Java application. If you like this kind of stuff, just go and watch the video, it is really insightful.

On Polymer and Smileys... or Polysmileys - Carmen Popoviciu

Finally some hands-on coding! Carmen introduced the different features of Polymer and how it enables you to create reusable web components. This was one of those talks where you saw from the first second that the speaker really loves the topic. Carmen's passion and enthusiasm were very catching and the code examples she showed were precise, clear and easy to follow. One of my favorite talks of this conference.

Polymer abstracts away the browser differences for web components, but of course the web component standard is still changing, so Polymer will have to adjust to those changes once they are final.

Busy Java Developers Guide to Hacking in Java - Ted Neward

Another talk with 10/10 nerd points. Ted had prepared a variety of special topics concerning tweaks to the JVM and the JDK. Basically it was an in-depth tour of some of the more exotic settings you can use to alter compiler and runtime behaviour. I must admit I can't restate them here, so again I recommend watching the video. Ted's expertise combined with his presentation skills and his relaxed, easy attitude made this a very enjoyable talk.

Knowledge is Power: Getting out of trouble by understanding Git - Steve Smith

I must admit, Steve knows Git. He gave an in-depth overview of the basic principles Git was built upon. This started with the structure of the .git directory and how objects are created and stored, and ended with the power that the reflog gives you. In essence: if you screw up your Git project, check the reflog to find the hash from before it all went wrong and then reset to that. For the details of how to do that, just watch the video.
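A minimal sketch of that recovery, in a throwaway repository (the commit names and the mishap are made up; the commands are stock Git):

```shell
#!/bin/sh
# Simulate losing a commit and getting it back via the reflog.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "good work"
good=$(git rev-parse HEAD)

git reset -q --hard HEAD~1   # oops: "good work" is gone from the branch...
git reflog | head -n 3       # ...but the reflog still lists its hash
git reset -q --hard "$good"  # jump back to the hash found in the reflog
git log --oneline -n 1       # "good work" is back
```

In a real mess you would of course read the hash off the reflog output instead of having saved it beforehand, but the principle is the same.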

One thing I do not agree with though is his hate towards git-flow. It is true that the schema looks highly complicated and weird, but once you understand the basic system it is rather simple, and intuitively most teams implement something very similar. I do agree with his opinion that you should only add as much complexity to your development workflow as necessary.

Git in Practice - John Stevenson

The last slot of the day was another Git topic, this time a Birds of a Feather session (more a guided group discussion). John threw in some rather generic questions about workflows and team structures and the group shared their experiences and opinions. There were no real surprises there; teams usually have some git-flow-like development process with feature and release branches that get merged back to master/develop. One thing shocked me though: someone explained how their team uses code reviews to ensure code quality before merging a branch and, as I expected, there was a lot of nodding and agreement among the other participants. But one guy spoke up and said that he considers that sad because "you obviously don't trust each other". That was a huge wtf moment for me, but I was relieved that no one else shared this view. A team where this kind of attitude is common must end up with a highly inconsistent code base.. *shudder*..

Day 2

Microsoft and Open Source? Microsoft and Java? Really? - Giles Davies

Well, the title says it all, really. Like most IT guys I have some reservations when it comes to Microsoft, caused of course by their actions in the 80s, 90s and early 2000s. But seeing how they are now trying to get to grips with IT reality (e.g. the development of Edge), I figured I'd give them a chance to convince me that they are not that bad anymore.

The picture Giles tried to draw was one of a "regular" IT company that uses different technologies to provide their customers with the best possible service. While that all sounded fair enough, it was more or less what was to be expected from this kind of talk. How much of it is really as good as it sounds, only time can tell. But at least it is clear that Microsoft is trying to abandon its old ways, and so they will probably become a global player again who might rival Apple or Google.

Composing Music in the Cloud - James Weaver

Mr JavaFX talking again. As always, his experienced speaking style made the presentation itself worthwhile. The topic was nice but it did not exactly blow my mind. Basically Jim showed how he was using Cloud Foundry to host a Spring Boot application that can assist in different ways of music composition, and how fast this can be done with modern technology.

If you are looking for some distraction after a work day, with some kind of nerd flavour, this video is the way to go.

Java 9 Modularity in Action - Sander Mak, Paul Bakker

Oh yeah, some real bleeding-edge stuff coming up. Sander and Paul gave a nice insight into the Java 9 module concept that is intended to change how Java applications are written and run. One of the main points for me is that this awful classpath "thing" is meant to die with Java 9. Instead, applications will be composed of modules which expose certain classes and functions and can be combined using requires and exports declarations.
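To give an idea of what such a declaration looks like (the module and package names here are made up; the keywords are from the Java 9 module system as presented):

```java
// module-info.java -- hypothetical module names, real JPMS keywords.
module com.example.app {
    requires com.example.persistence;  // depend on another module's exported API
    exports com.example.app.api;       // only this package is visible to consumers
}
```

Everything not exported stays hidden at compile time and run time, which is exactly the encapsulation the classpath never gave us.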

Does that sound familiar? Oh yes.. OSGi!!!! The concepts of OSGi are rather old; the last time I really worked with it was back in 2003/2004. But so we meet again. Don't get me wrong, this is nothing bad. I really like the concept of OSGi, as it also allows you to have different versions of modules available at the same time etc. So I am excited to see how this works out when it comes to the "regular" Java world.

Of course it is not so easy to upgrade to this new version, at least not when you want to follow the new paradigm. First off, the JDK and JRE use the module structure too, so if your application is doing something nasty with those resources, it is very likely going to break with Java 9. And how do you migrate to the module structure? Every application has a lot of dependencies, and those have to be migrated as well. But that was taken care of: you can use plain jar files as so-called automatic modules, so you can upgrade your dependencies step by step.

The downside is that in practice those upgrades will not happen soon at most companies, because they are expensive and risky. So we will have to wait and see when Java 9 will truly take over.

Refactoring to Java 8 - Trisha Gee

Not so much bleeding edge, but still a very interesting topic, as Java 8 is, unfortunately, still new for most companies. In addition, a talk by Trisha is always worth listening to, and so was this one. What I liked most about her approach was that she did not do any of that fanboy "omg, you have to use all the features asap because this is new and therefore better, omg" stuff. Instead she took some of the most popular Java 8 features and examined them closely in terms of: How easy is it to migrate? How much can go wrong? And how does the performance change?

The results were somewhat disillusioning: not only did some patterns prove pretty tricky to migrate, but the performance analysis also showed that there was hardly any gain from using the new features, and in some cases performance even dropped.

The bottom line is that you have to decide for each language feature and each use case independently whether it is worth migrating, and you always have to run performance tests on the code base to compare the behaviour under high load. For everyone who is about to upgrade to Java 8, this video is a must-watch!
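The kind of refactoring Trisha examined can be sketched like this (my own toy example, not her code): both methods compute the same result, and only measuring tells you whether the stream version is actually faster.

```java
import java.util.Arrays;
import java.util.List;

public class RefactorDemo {
    // Pre-Java-8 style: external iteration with a loop.
    static int sumLengthsLoop(List<String> words) {
        int total = 0;
        for (String w : words) {
            if (w.startsWith("a")) total += w.length();
        }
        return total;
    }

    // Java 8 style: a stream pipeline. More declarative, but not
    // automatically faster -- benchmark before assuming a gain.
    static int sumLengthsStream(List<String> words) {
        return words.stream()
                    .filter(w -> w.startsWith("a"))
                    .mapToInt(String::length)
                    .sum();
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("apple", "banana", "avocado");
        System.out.println(sumLengthsLoop(words));   // 12
        System.out.println(sumLengthsStream(words)); // 12
    }
}
```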

Faster Java By Adding Structs (Sort Of) - Simon Ritter

And some more Java action! This time about the ObjectLayout project from the guys at Azul. The idea behind it is to bring to Java the performance advantages that C/C++ have thanks to the use of structs.

With a struct definition the compiler knows exactly how large each struct is, so in an array of structs the data lives inline at predictable offsets instead of behind pointers to the real data. That way the code can make assumptions about where a certain object is within an array etc. and thus take shortcuts during execution.

This is what the ObjectLayout project tries to bring into Java. For the details you'd better watch the video, as I am sure I would mix something up if I tried to summarize it here. Pretty weird stuff.
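The motivation can be sketched in plain Java (my own illustration of the layout problem, not the ObjectLayout API): an array of objects is an array of references scattered over the heap, while flat primitive arrays keep the data contiguous, which is the struct-like layout the project is after.

```java
// A regular Java object -- an array of these holds pointers, not data.
class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

public class LayoutDemo {
    // One pointer indirection per element; elements may be anywhere on the heap.
    static double sumObjects(Point[] pts) {
        double s = 0;
        for (Point p : pts) s += p.x + p.y;
        return s;
    }

    // "Struct of arrays": the data itself is contiguous in memory,
    // the cache-friendly layout that struct-like approaches enable.
    static double sumFlat(double[] xs, double[] ys) {
        double s = 0;
        for (int i = 0; i < xs.length; i++) s += xs[i] + ys[i];
        return s;
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(1, 2), new Point(3, 4) };
        double[] xs = { 1, 3 }, ys = { 2, 4 };
        System.out.println(sumObjects(pts)); // 10.0
        System.out.println(sumFlat(xs, ys)); // 10.0
    }
}
```

Both loops compute the same sum; the difference is purely how the data sits in memory.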

Building a multiplayer game using Reactive Streams - Michal Plachta

The last talk of the conference was again more hands-on. Michal showed us how Reactive Streams work and how they can be used to create a nice, simple multiplayer game in a live coding session. Nice topic, and the coding was rather easy to follow, so for me this was a pretty good end to the conference.


As I said before, the value-for-money ratio of this conference is still great. Lots of talks and a variety of topics to choose from, with some very good and experienced speakers.

Of course there are some rather minor issues, or rather things that I would not need, which I want to sum up just for completeness' sake.

In comparison to the first year, there were fewer tables, seats and especially power plugs available. This makes it a bit cumbersome for us techies with all our electronic devices. I understand that the space is limited and the organizers preferred to put up more buffets for lunch, but still it would be great to get some more set up in the future.

There are also more talk slots that I did not mention, for one the Ignite talks, a series of 5-minute talks on small or humorous topics. I did not get much from them and I really don't see the point of them, but obviously there is an audience for that. The others are 15-minute talks during the lunch breaks. Usually you are in line waiting to get to the buffet and so don't have time to listen to them; it seems pretty ungratifying to give such a talk. But again, this has been in place since the first year, so some people seem to like it.

The one thing that really kind of bummed me out was the organization of the Devoxx party. The party took place on the second evening, but it was not really advertised much apart from the closing talk of the conference. It was then that I took a closer look at the home page, finally found the section about it, and saw that you had to go to some of the sponsors to get a ticket for the party. At that point it was of course too late, but we decided to give it a try and went to the location, just to see that quite a lot of people did not know about the tickets. So it was not just my problem ;-) Maybe that is something to improve in the years to come.

The last thing I want to criticize is that even during the closing talk, not only were the sponsor stands removed but the wifi was also taken down, which can be a problem for foreigners who have no data contract and want to look things up, like e.g. the party location.

Even though this seems like a lot to complain about, I want to stress the point that this is a really good conference considering the low prices, and I am very sure you will not regret coming to it in the future.

Sonntag, 15. Mai 2016

Front End Excursion °2 - BeyondTellerrand Düsseldorf 2016

So I am off to a conference once more; this time it is beyondtellerrand (#btconf) in Düsseldorf, Germany, together with my colleagues Francesco Schwarz and Thomas Berendt. And yes, this is the second front end conference in a row for me, go pixel pushers!! ;-)

Supporting Program

Running Man

Usually I would start with stuff like what happened between the talks or maybe at the warm-up meeting. But in this case I just have to start with what we are now calling the "Jogging Man Incident".

On our way to the hotel, we encountered a jogger, a middle-aged man with a medium-sized beer gut. Why am I telling you this? Well, in order for you to understand, I have to describe what he was wearing. The most ordinary parts were his jogging shoes and sunglasses; a bit more extraordinary were the black cowboy hat and the black-and-white bandana. But what caused us all to kind of stop dead in our tracks was the rest of his outfit: a black-and-white striped speedo. And that's it..

Maybe that's nothing unusual in Düsseldorf, but boy did we have a laugh ;-)

Warm Up BBQ

On Sunday evening there was a warm-up BBQ, hosted by the lovely people from sipgate at their beautiful office hidden in a nice and quiet backyard. There was lots of delicious food and drinks, even a rich variety of food for vegans (not that it matters much to me, but I think it is good they were that considerate). Also, as a side effect, we were able to preregister for the conference so we could sleep longer on Monday :-)

But apart from that there was also a lovely talk by Eva-Lotta Lamm, provocatively titled "Sketchers are better Thinkers". Even though I am not totally convinced, I did get a feeling for how sketching can help you come up with creative solutions. Sketchers start with simple shapes like lines, triangles, squares or circles and then proceed to more complex objects by composing them out of those simple shapes. This way you practice, for example, how to split complicated requirements into simpler parts and thus come up with an elegant solution.

Another point that Eva-Lotta illustrated nicely in this low-tech but highly entertaining talk was a suggestion of how to put your skills into relation with those of others. She did so by giving a brief overview of how her sketches evolved over the years, from very, very simple up to rather sophisticated. This is something she is very happy about: seeing her own considerable improvement instead of comparing herself to people like professional painters or sketchers. Those are of course able to produce far more realistic images, so comparing yourself to them is not only very depressing but also totally pointless. Always look at what you have achieved, not only at what you have not achieved yet.

Thank you for the Music

Throughout the whole conference there was not just some music playing; instead they mixed a special beyondtellerrand song that played at the start and end of the conference. That was a nice twist in itself, but it was not just some playback there. Instead there was a DJ, Tobi Lessnow, mixing live for us, and you could see immediately how much fun he was having :-) And the coolest part was Tobi mixing quotes from the previous talk into his music during the breaks. I know it does not sound like much, but it really lifted people's spirits and made the time to the next speaker pass by rather quickly. Thanks for that!

Entertainment on the side

Oh, we are not done yet with the entertainment. The organizers put up some really old vintage consoles, Atari, Nintendo etc., where you could play classics like Pac-Man, Mario Brothers or even Pong! Yeah, those were the days! Really, really awesome.

But what cracked us up the most were the chaps from Accenture Digital with their booth display.

For those who do not see what I am talking about: look at the line below "solutions", and in case you need more hints, after a while the first two words in that line were taped over rather quickly.

Sorry guys, I know that was just a small mistake, but especially at a "front end" conference that just has to create lots of lols ;-)

Day 1

Ok, enough dawdling, let's talk business.

Time and Creativity by Christopher Murphy

Chris elaborated on the concepts of procrastination/late binding as well as t-shaped thinking and yak shaving. I am still not sure I totally got everything he meant, because it does not quite make sense to me yet ;-) But I will try to sum it up.

Chris' basic example was one we all know: you have a deadline for a project, but instead of working on it immediately you do other stuff, and then at one point you realize that you must start now and work around the clock to finish on time.

The time that gets "wasted" until you start working is what Chris calls "fuck off time". At this point I was expecting suggestions on how to motivate yourself to start earlier and get the project done properly. But that was not quite what Chris was getting at. His point was more that you should use the "fuck off time" creatively so that it is not wasted, as your time on earth is limited. He also explicitly stated that you should not work on the project during that time but do something else instead. To me this seemed a bit odd, as it implies that usually the "fuck off time" is wasted and people are just sitting there waiting for the deadline to approach.

Maybe my confusion stems from me not being a big fan of procrastination, because at work I often encounter the results produced by people who are "fucking off" a lot and then leave a mess ;-) I think this technique is not for everyone and is often abused by people who just want to be lazy.

Another term Chris used is late binding, basically a kind of deliberate procrastination where you only do stuff when it has to be done. That way you are able to take into account the most recent requirements and thus can create a solution that honours those best. While I understand this reasoning, I do not totally agree in the context of building software, for the same reason as above. Too often this idea is abused to just slack off and then produce rubbish within a short amount of time. So again, this concept should not be adopted without questioning whether it is really appropriate for the task :-)

The second topic in Chris' entertaining talk was yak shaving and t-shaped thinking. Yak shaving describes what happens when you set off to do a small task and along the way notice that something else needs to be done, and so start with that before finishing the original task. Like when you start tidying up and notice that a drawer is broken, so you start fixing that, but then you see that some of your tools are broken, so you head off to the hardware store, and then you see that you have a flat tire, and so on. For Chris this is a rewarding technique, as you occupy yourself with tasks from different areas of expertise. This leads to the concept he calls t-shaped thinking, meaning that the future needs people who have a certain area of expertise they excel in, but also basic knowledge in a lot of other areas. This allows them to connect ideas and approaches from all of those areas to create a more complete and satisfying solution for a problem.

And finally he pointed out how important it is to solve a problem by addressing its root cause and not simply the symptoms. This is where t-shaped thinking comes in handy again, as sometimes you have to think outside the box to really understand what the problem is and how to solve it in the best possible way.

This last part is what I can relate and agree to the most from this talk. And while I disagree with Chris on some of his points, I must say this was a very entertaining talk and a great start to the conference. And in case I misunderstood anything, I would be grateful for any correction :-)

Advice from a young Designer to younger Designers by Lil Chen

Second in line was Lil Chen. She spoke about her experiences when she started out in her first job and what she learned from them. Basically all of her points can be applied to any profession, not just designers, and it boils down to a few important points.

First off, it is important to learn how to communicate about your job and duties with people from other areas, e.g. how a designer can explain to a developer or marketeer what makes a design good or bad. And instead of just rambling about something being bad, explain what exactly the problem with it is, so that others can understand you and have a chance to relate to it. That way they have the chance to communicate with you and your colleagues in a better and more efficient way.

Second, you have to actively promote your skills; you cannot just expect people to know what you can do. Instead you can start little side projects to show how you could improve certain aspects at your company. Make yourself heard without being arrogant, be encouraging, and strive to make your and your colleagues' work better.

It is also important to show confidence when you start a new job; don't act like you don't care or are scared to take action. Try to be a productive member of your workforce. And last but not least, everyone is hired to fill a role, but no one says that role is set in stone. You can shape your own role to become something more than initially intended and thus become an even more valuable asset for your employer.

All things I can second and that everyone should keep in mind, not only when starting a job but throughout our careers.

An abusive Relationship with AngularJs... by Mario Heiderich

Mario is a security specialist and together with his team runs penetration tests on various systems and software. Mario is also a lightning-speed talker, so it was a real challenge to keep up with this very technical and advanced talk and not miss anything :-)

He showed various ways it was possible to exploit AngularJS in order to get access to top-level JavaScript components like window, which should be protected by the Angular sandbox. Especially interesting was how he illustrated the ping-pong between his team and the Angular developer team: how the bugs they reported got fixed, and how they then set off to find new ones. At one point they were able to tell who had fixed a bug and sometimes predict what new bugs that fix most likely introduced - very impressive.

Their work has increased the framework's security dramatically, but unfortunately it turned out that Angular 1.x is still, and will always be, somewhat unsafe as long as you have user input in your application. An upgrade to 2.x should fix those problems, but would cause a rather large amount of work, since the new version contains a lot of breaking changes.

What I found most interesting was when he explained how they managed to introduce a deliberate security hole into the framework source code via a pull request seemingly intended to fix a minor bug. This was done with the knowledge of the Google security team, and they took care that the bug was removed before the new version was released. But they did prove that the system is vulnerable and that nobody can be sure their software is not prone to security issues. Despite this kind of depressing result I really enjoyed this very technical talk.

Unreal.js by Michael Chang

Michael showed us different projects he was part of during his career. One especially astonishing one was the visualisation of the sculpting process of various artists. By using tracking information from the sculpting tools and head-mounted cameras, they were able to create some kind of VR experience. You could see how the sculpture was created from the view of the artist and could also rotate around the scene while the sculpture was growing. This is very hard to describe properly, so I hope you will be able to see for yourself in the video of the talk.

Another aspect was how Michael wanted to create some kind of SimCity-like game, where you can see a city growing in a simulated timelapse, with cars driving through the city and the lights changing during the night. All in all an ambitious project.

He showed us his first steps, where he created a tool to draw streets and created images for buildings and lighting using three.js. It was also impressive to see his strive for perfection and how far he got with the game. Then at one point he had to update his libraries, only to find that a component of three.js that was crucial to him had been deprecated, and thus he could not rely on it anymore.

This taught him how dependent you are on the maintainers of open source projects. So after considering his options he came across Unreal.js, which is a JavaScript plugin for the Unreal 4 engine. Finally he decided to move to this engine, and he was able to show us some of the awesome capabilities of this tool, which allows you to leverage the Unreal engine's features through a JavaScript API.

For me this talk was inspiring; I want to have a look at Unreal.js and maybe create a small game of my own. Hopefully I can scrape together some time for this :-)

Designing socially impactful digital Experiences by Catt Small

This talk again addressed the more abstract aspects of technology projects. Catt elaborated on how well-meant social projects can backfire due to a lack of planning or research (a.k.a. Murphy's law biting people in the butt).

She pointed out that to create something truly meaningful you first have to really understand the complete scenario:
  • what is the exact problem you are facing, not only the symptoms but the root causes?
  • what do you need to provide to improve the situation?
  • sometimes you do not have to create a completely new experience but it is better to augment something existing
  • look at all possible failures and how those would impact people relying on your project
  • do you need an information app? Or maybe a communication tool? Or something completely different?
  • what are the limitations of your project, not only technology-wise?
  • do you really have all the facts you need?
  • did you ask for help on aspects you lack experience in?
  • when you got feedback, did you really consider it and learn from it?
  • test everything as soon and as often as you can
All of this is crucial to designing something that is really meaningful to society. Catt decided to create a game to improve sex education in the United States. She did this because she noticed that there was no standard among the different states and that more often than not young people were badly or even ill informed by their teachers or other adults. She wanted to change this and thus created an application that allows people to get informed about sexual topics without having to be afraid of being denounced by others. And even though she knows this will not change the world overnight, she is sure it will contribute a small part towards improving the situation.

I think it is great, that today someone can build something like that and help to make the world a better place.

Typography on the Web is just like other Typography, only much more interesting by Indra Kupferschmid

No web conference without at least one typography talk :-) 

We started off with the history of typography and layouting, from the first newspapers and books up until the modern age. Indra showed examples of different kinds of typography - posters, books, scientific articles or ads - the main differences between them, and what those differences were used to achieve.

Soon it became clear what she was getting at: the posters, for example, had large catchy letters so the reader's attention is drawn to the few key points. The ads were similar, not that much text but with strong emphasis. Books and articles, on the contrary, were designed for smooth and easy reading, while the article was more structured, to guide the reader's focus to its main aspects.

Indra pointed out how important it is to take into account how your text will be read when choosing fonts and layouts. If you have a long text, the font should be clear and easy to read, with high contrast (yeah, I know the current blog style does not adhere to that... I will try to make a better one, promise!).

Another good point was how different styles (bold, italics etc.) can be used to guide the reader's focus to the central aspects of a text. She rounded everything off with a very good series of examples of good and bad typography choices for all kinds of reading purposes. Very insightful.

Living Language by Ori Elisar

This was an unscheduled intermediate talk by Ori about the art project he put on display in the lobby. The background is the evolution of the Hebrew alphabet from the ancient to the modern version. To illustrate this he prepared a set of petri dishes, and in each he put a bacterial culture in the form of a letter from the ancient alphabet. Then he added nutrients for the bacteria in the form of the modern version of the letter. That way the bacteria grew from the ancient into the modern form.

When he was happy with the result he killed the bacteria and thus preserved their spread for exhibition.

The Reinvention of Normal by Dominic Wilcox

Okay, I must admit it: I was NOWHERE near ready for what hit me there. Dominic Wilcox is definitely an unbelievably creative mind. He has done tons of drawings and sketches of inventions or simply weird ideas. Some of them remind me of the style of Monty Python in terms of weirdness and lunacy, which is a big compliment in my book.

No description can do justice to what this man does, so best check out this short movie about him. Here are just a few of the things I remember:

  • his office is a tree
  • reverse bungee
  • head mounted cereal crane
  • reverse hearing
  • tummy rumbling amplifier
But he is not only inventing funny stuff himself; more importantly, he is running projects to encourage children to think creatively and innovatively, to make them explore their minds and show them that it is okay to have strange ideas and not to be afraid to speak them. This is something I consider very valuable, and I hope he reaches as many children as possible with it.

This extremely funny talk was a perfect closer for day one and made me look forward to... 

Day 2

Resilience by Jeremy Keith

As Jeremy so eloquently put it, this was the hangover slot. But boy, were we in for a treat. First he started with the good old "History of the Internet", beginning with the early ARPANET and the origin of the World Wide Web at CERN by Tim Berners-Lee.

Even though this is old news, Jeremy told it in a highly entertaining way. What was new, though, was his way of looking at the basic principles that compose the internet as we know it today: HTTP, URIs and HTML. Simple principles that let us compose a net of documents and information that can be extended easily and seamlessly. Anyone can put out a new website without needing permission or being added to some central repository. And why is that? Because all the core technologies are designed from the ground up to allow this. They seem simple and sometimes even crude, but they are just very elegant.

Moving on to the topic of resilience, Jeremy took a look at the three basic components that we use to build websites: HTML, CSS and JS. And although it is something we all know, it was still enlightening to hear him point out that HTML and CSS are also resilient technologies from the ground up.

You can still open websites from the very first day in any browser; likewise, old browsers can still display websites with new HTML. This is because a browser simply ignores tags it does not know while still displaying the content within them. That way it is virtually impossible to break a webpage with HTML. The same goes for CSS: whatever the browser does not recognize is ignored, but it goes on parsing the remaining HTML and CSS to show as much as possible to the reader, without presenting any errors.

JS, on the other hand, is very non-resilient. If you put something in your JS that the browser cannot handle, execution will break. The rest of the JS will not be parsed or executed and there will be errors, at least in the browser's developer console. This is due to the fact that JS is an imperative language, as opposed to HTML and CSS, which are declarative languages.

So it would be good to use as much HTML and CSS and as little JS as possible, right? So why don't we do it? Sometimes it is because we do not know better, but more often than not it is because we want to build functionality that can only be implemented with JS. But history shows that if a feature becomes popular and important enough, browser vendors start to build this functionality into HTML itself. And then we can use HTML for this feature, making the site more resilient again.

The last thing I took from this talk is the advice to build your website in a way that as many people as possible can enjoy it. Meaning you build the page with the most basic of feature sets so that old browsers can still make use of it. Then you add additional convenience features leveraging newer technologies. That way up to date browsers can provide the best of all possible experiences for their users, while older browsers can still provide access to the data.

I am sure this is not always perfectly feasible, but it is definitely something to keep in mind when building a page.

Designing meaningful Animation by Val Head

Usually when I hear people talk about animations I get the creeps, because I have seen too many pages loaded with pointless animations all over. This is something Val is also aware of, and her mission is to sharpen people's awareness of how to use animations in a way that makes a webpage work more seamlessly and more consistently.

In order to show us what that means she put on a little live coding session. During it she showed how to use cubic Bézier curves to get nice and smooth movements, as the default timing functions in CSS are somewhat limited.
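The math behind those curves is compact enough to play with outside the browser. As a small illustration (a stdlib-only Python sketch, not from the talk), here is how a CSS `cubic-bezier(x1, y1, x2, y2)` easing curve maps elapsed time to animation progress:

```python
# CSS's cubic-bezier(x1, y1, x2, y2) fixes the curve's endpoints at
# (0, 0) and (1, 1); only the two inner control points are yours to pick.
def bezier_point(t, x1, y1, x2, y2):
    """Evaluate the cubic Bezier at parameter t, returning (x, y),
    where x is the fraction of the animation's duration elapsed and
    y is the eased progress value.

    Note: browsers actually solve for the t that yields a given x,
    then take y; this sketch just samples the parametric curve."""
    mt = 1.0 - t
    x = 3 * mt * mt * t * x1 + 3 * mt * t * t * x2 + t ** 3
    y = 3 * mt * mt * t * y1 + 3 * mt * t * t * y2 + t ** 3
    return x, y

# The built-in "ease" keyword is equivalent to cubic-bezier(0.25, 0.1, 0.25, 1):
# progress rises quickly at first, then settles in gently.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = bezier_point(t, 0.25, 0.1, 0.25, 1.0)
    print(f"t={t:.2f}  time={x:.4f}  progress={y:.4f}")
```

Plotting y over x for different control points is a quick way to get a feeling for why a hand-tuned curve can look so much livelier than the built-in keywords.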

Unfortunately this is another talk that does not make sense to recount here, so you'd better just watch the video. I think it is really worth it. My colleague Thomas Berendt was hooked immediately and spent the rest of the conference hacking together animations on CodePen while listening to other talks. So Val really hit it off there :-)

Your Hero Images need you! Save the Day with HTTP/2 Image Loading by Tobias Baldauf

Now how can I put this? IT... WAS... AWESOME!! I do not mean to offend anyone, but this is exactly the kind of talk I would like to have way more of. Filled with reliable research data, supporting information for all statements and everything explained in detail. And on top, some hands-on examples. It just swept me away!

Oh yeah, back to the actual topic. Tobias explained what can be done to speed up image loading on your website. Now you will say: ahh, I know about that stuff already. But behold, there is something to this that is not widely known yet.

But first, let's recap the known stuff. Step 1: switch to HTTP/2. With HTTP/2, resources from the webserver are multiplexed as streams over a single connection, so clients can load them in parallel instead of sequentially as in HTTP/1.x, and browsers can display data quicker. That way pages load much, much faster.

Step 2: use optimal compression. Here Tobias suggests mozjpeg which, according to him, is currently the best and most efficient JPEG encoder on the planet. That way you can drastically reduce the amount of data being sent over the wire.

Step 3: use progressive JPEG images. With progressive images the browser can display images to the user even though they are not fully loaded yet. Non-progressive (baseline) images are encoded sequentially, so the browser loads and displays them line by line, which we all know from slow connections. With progressive encoding, the server basically sends a small amount of data from each of the 8×8 pixel blocks a JPEG is encoded in, scan by scan. With every scan that gets transmitted the image gets clearer and sharper, but it is pretty much completely comprehensible with 15% of the data transmitted.
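Whether a given file is baseline or progressive is visible right in its marker structure: a baseline JPEG carries an SOF0 frame header, a progressive one an SOF2 header. As a small stdlib-only sketch (my addition, not from the talk), you can detect this by walking the marker segments:

```python
def is_progressive_jpeg(data: bytes) -> bool:
    """Walk the JPEG marker segments and report whether the stream uses
    progressive DCT encoding (SOF2 marker, 0xFFC2) rather than baseline
    encoding (SOF0, 0xFFC0). A sketch: real files may also use other
    SOF variants, restart markers and fill bytes, which are not handled."""
    if data[:2] != b"\xff\xd8":              # every JPEG starts with SOI
        raise ValueError("not a JPEG stream")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("broken marker structure")
        marker = data[i + 1]
        if marker == 0xC0:                   # SOF0: baseline, top-to-bottom
            return False
        if marker == 0xC2:                   # SOF2: progressive scans
            return True
        # every other segment here carries a 2-byte big-endian length
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len
    raise ValueError("no SOF0/SOF2 marker found")
```

Handy for auditing a site's image assets: run it over your hero images and you immediately see which ones would still render line by line on a slow connection.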

Now this is what you probably already knew. What is very likely new (step 4) is that you can toy around with the scan settings. By using, for example, fewer scans and transmitting a bit more data in the first ones, the user has an almost completely usable webpage much, much quicker. The rest of the image data is still loaded and sharpens the images further, up to full detail.

All of this is very valuable information in times when it is so important to provide readers with workable webpages as quickly as possible.

For me, this talk was the highlight of the conference. Just great!

Cracking the Code by David Jonathan Ross

Despite the title, this was another typography talk ;-) But it was indeed different from the usual ones, because this time it was about fonts for us developers: fonts that you can use in your IDE and editors to make your code as readable as possible.

David gave a detailed overview of a lot of coding fonts, from the early days like Win95 until now, and what he learned from them.

All this led him to develop his own coding font, called Input. A free font with 7 different weights and 4 different widths. From there on it was a bit of advertising for his font, but I did not mind, as he was making good points about how he arrived at the design choices for Input.

Bye, bye Screen by Andrea Krajewski

This talk was concerned with what we call the Internet of Things. Humans are fixated on things, because our mind kind of needs things to comprehend the world. That's why we also picture abstract concepts like time as an object, a thing.

Andrea introduced several projects from her students that try to combine sensory impressions with technology. One of these projects is called Juno, and it consists of two roundish objects that are meant to transmit vital signs and moods over the internet to make long-distance relationships more bearable. Whenever someone touches their Juno, it reacts to body temperature, pulse etc., and then the counterpart emits light and maybe sounds to transport all of this information to the partner on the other side. At least that's what I understood.

Another project, called Miro, is meant to determine how you feel and what you need, and then make suggestions on how to keep or improve your health. This could be something like telling you to go running or to eat or drink something, for example.

Unfortunately there is not much more that I took from this talk, not sure if I missed a key point. If so, please let me know.

How to get the Public to fund your daft Ideas by Mr. Bingo

Ufff, the final talk. As it turns out, this would be a similar finisher to the day before: extremely funny and entertaining. Mr. Bingo is also a very creative mind, as he likes to draw funny and often very offensive sketches and images, which he has become quite famous for.

At one point he offered on Twitter to send an abusive hate-mail postcard to anyone who responded to his post. And there were like 50 replies within a few minutes. In fact, the response was so big he could not handle it all and had to shut the offer down after a short time. He still opens his services from time to time for a few minutes, and even then he gets so many requests that it takes him weeks or months to process them all.

Now this does sound pretty weird: why would you pay someone to insult you by post? Well, to understand that you have to see his artwork. Some of it has even been exhibited in a museum. And once again the medium of text fails to describe what we saw on his slides. So again, please just watch the video of the talk and you will see what I mean.

Ok, then we came to the actual topic: crowdfunding. Mr. Bingo wanted to print a book of his drawings, but not just any book. He wanted it to be a top-class book. With high-quality paper, neat designs on the cover, just everything top-notch. But the cost for such a book was simply too high for any publisher to take on the project.

And that's where Mr. Bingo wanted to give Kickstarter a try. When he had a look at the promotional videos of other Kickstarter campaigns, he considered them a tad boring, all being basically the same. He wanted something different. So he set out to make a badass rap video.

So the main part of the talk was him describing how he went about creating this video, his mistakes and everything he learned during the process. All that is too much to write down, and I could not give enough credit to the humor in it. So once again I have to tell you to just watch the video. Believe me, it is worth it!!!

Rounding it up

After that last talk, we set off to the airport to catch our flight home. But we spent all the time talking about this really great conference and all the laughs we had. And all thanks to Marc Thiele.

As you might have noticed, I linked the videos of the talks and the speakers' Twitter profiles in the respective subheadlines. Unfortunately, at the time of writing, the videos of Dominic Wilcox and Mr. Bingo are not available. I really hope they will be put online, as those are the two talks that would benefit most from it.

And finally, a little disclaimer. Everything I wrote above is simply my personal opinion and impression. Should I have forgotten or misunderstood anything important, please let me know and I will gladly correct it. Otherwise, I hope you enjoyed reading about my experience, and to everyone who made it through this rather lengthy post: thank you!