18 May 2019

Thoughts on Type Variance

In Java, arrays are covariant (an array can be assigned to a variable typed as an array of a supertype):

Integer[] arr = {1, 2, 3};
Object[] objArr = arr; // allowed because of being covariant
objArr[0] = "Hello"; // runtime error: ArrayStoreException

But this comes at a cost: it might fail at runtime (as in the last line).

In Kotlin, arrays are invariant:

val arr = arrayOf(1, 2, 3)
val objArr: Array<Any> = arr // compilation error

So invariance doesn't allow List<Integer> to substitute for List<Object>, nor vice versa.

However, covariance allows List<Integer> to substitute for List<Object>, but not vice versa.
And contravariance allows List<Object> to substitute for List<Integer>, but not vice versa.

In Kotlin, Lists (read-only lists) are covariant, so the following works:
val list = listOf(1, 2, 3)
val objList: List<Any> = list

Because these Lists are read-only, the runtime failure from the Java array case can't happen.
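For comparison, Java can express safe covariance too, via an upper-bounded wildcard: reads are allowed, writes are rejected at compile time, which is the same restriction that makes Kotlin's read-only List safe. A small sketch (the sum method and class name are made up for illustration):

```java
import java.util.List;

public class CovarianceDemo {
    // Covariant read: accepts List<Integer>, List<Double>, any List of Number subtypes.
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) {
            total += n.doubleValue(); // reading elements as Number is always safe
        }
        // nums.add(1); // compile error: writes are forbidden, unlike with Java arrays
        return total;
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        System.out.println(sum(ints)); // prints 6.0
    }
}
```

The compiler moves the array's runtime ArrayStoreException check to compile time by simply banning the write.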

With contravariance, the supertype can substitute for the subtype (which might seem illogical at first), but look at this (fictional) example:

operateOnIntegers(List<? super Integer> list) {
    list.add(30); // only ever writes Integers
}

Because operateOnIntegers writes a narrower type than the passed list might hold (it writes Integers, while the actual list may be a List<Number>), it can never write a value into it that exceeds its limits.
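The fictional example above can be turned into a runnable Java sketch (the fillWithIntegers method and class name are made up for illustration): a method that only writes Integers can accept any List<? super Integer>, so the wider List<Number> substitutes safely for List<Integer>.

```java
import java.util.ArrayList;
import java.util.List;

public class ContravarianceDemo {
    // Contravariant write: the destination only needs to be able to HOLD Integers,
    // so List<Integer>, List<Number>, and List<Object> are all acceptable.
    static void fillWithIntegers(List<? super Integer> dest) {
        for (int i = 1; i <= 3; i++) {
            dest.add(i); // writing an Integer can never violate the list's element type
        }
        // Integer x = dest.get(0); // compile error: reads only give Object
    }

    public static void main(String[] args) {
        List<Number> numbers = new ArrayList<>();
        fillWithIntegers(numbers); // the supertype substitutes for the subtype
        System.out.println(numbers); // prints [1, 2, 3]
    }
}
```

This is the consumer half of Java's PECS mnemonic: producer extends, consumer super.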

24 March 2019

Example git persistent data structure

To follow up on the post Persistent data structure and git, I've created this repo: https://github.com/mhewedy-playground/how-get-works

Looking into the objects database, here are the database objects exposed:

$ git log
commit d5ef93b09fe8d80fa903894a1bac93d3a67d55d3 (HEAD -> master)
Author: Muhammad Hewedy <mhewedy@gmail.com>
Date:   Mon Mar 25 00:09:37 2019 +0300

commit 16dc45d02209a4bfcb26b066f18bf290507cf87f
Author: Muhammad Hewedy <mhewedy@gmail.com>
Date:   Mon Mar 25 00:05:20 2019 +0300
# first commit tree
$ git cat-file -p 16dc
tree 029ec860ecb064cf689695c176a5baafc910916a
author Muhammad Hewedy <mhewedy@gmail.com> 1553461520 +0300
committer Muhammad Hewedy <mhewedy@gmail.com> 1553461520 +0300

$ git cat-file -p 029e
040000 tree e9199b34206372b3d2b1e2c06b3ccfeaef6d8804 a

$ git cat-file -p e919
040000 tree b1d74266c8b55b9cd7796c056888b6edcc1d1a98 b

$ git cat-file -p b1d7
040000 tree 68aba62e560c0ebc3396e8ae9335232cd93a3f60 c

$ git cat-file -p 68ab
100644 blob 3b18e512dba79e4c8300dd08aeb37f8e728b8dad hello.txt
# second (HEAD/master) commit tree
$ git cat-file -p d5ef
tree 9d0986abb4d98c7b1a26e6a4efe2156981ebd583
parent 16dc45d02209a4bfcb26b066f18bf290507cf87f
author Muhammad Hewedy <mhewedy@gmail.com> 1553461777 +0300
committer Muhammad Hewedy <mhewedy@gmail.com> 1553461777 +0300

$ git cat-file -p 9d09
040000 tree 48bc9a2ae3efb7aef6095e4db249a5775b71d155 a

$ git cat-file -p 48bc
040000 tree 5899cb357c13a7e7fa8aacc9b73ad741877d5390 b

$ git cat-file -p 5899
040000 tree 68aba62e560c0ebc3396e8ae9335232cd93a3f60 c
100644 blob 345e6aef713208c8d50cdea23b85e6ad831f0449 test.txt

$ git cat-file -p 68ab
100644 blob 3b18e512dba79e4c8300dd08aeb37f8e728b8dad hello.txt

Note that the tree 68ab (directory c, holding hello.txt) did not change between the two commits, so both commit trees point at the very same object. This is represented by the following diagram:

19 March 2019

Persistent data structure and git

From Wikipedia:
a persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not update the structure in-place, but instead always yield a new updated structure.
Git's own objects database is a persistent data structure.

Consider git repo with the following log:

commit 5a19382be2d700129a0c0ca81340a8858075501b (HEAD -> master)
Author: Muhammad Hewedy <mhewedy@gmail.com>
Date:   Tue Mar 19 23:35:34 2019 +0300

    Modifing menu

commit 64dba215678c5a888c178990da7186f8ada939b0
Author: Muhammad Hewedy <mhewedy@gmail.com>
Date:   Tue Mar 19 23:11:35 2019 +0300

    First Commit

When catting the latest commit and the one previous to it, we'll see that they share the same unchanged elements.

Let's check the latest commit:

$ git cat-file -p 5a19382be2d700129a0c0ca81340a8858075501b
tree 80c92c5fa5a179dec9d7f6f0b81b45dd7d32742d
parent 64dba215678c5a888c178990da7186f8ada939b0
author Muhammad Hewedy <mhewedy@gmail.com> 1553027734 +0300
committer Muhammad Hewedy <mhewedy@gmail.com> 1553027734 +0300

Modifing menu

$ git cat-file -p 80c92c5fa5a179dec9d7f6f0b81b45dd7d32742d
100644 blob 3e34d35b8b00e443866d4e9fcbb152a308497147 menu.txt
040000 tree 9cbe2293128382f7d60125add044260f8630012a recipes

Now let's check its parent commit (the first commit):

$ git cat-file -p 64dba215678c5a888c178990da7186f8ada939b0
tree 8ae44ec1b6ef7e4b66eb7be36fba1046081d7128
author Muhammad Hewedy <mhewedy@gmail.com> 1553026295 +0300
committer Muhammad Hewedy <mhewedy@gmail.com> 1553026295 +0300

First Commit

$ git cat-file -p 8ae44ec1b6ef7e4b66eb7be36fba1046081d7128
100644 blob 23991897e13e47ed0adb91a0082c31c82fe0cbe5 menu.txt
040000 tree 9cbe2293128382f7d60125add044260f8630012a recipes

The recipes tree has not changed, so both commits share it; the menu.txt file, however, has changed, so each commit links to its own version of it.

In the end, the head of the persistent data structure always refers to its latest version, with the ability to track every single change.
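The same structural sharing can be sketched as a tiny persistent data structure in Java (a hypothetical cons-list Node, not from any library): every "modification" returns a new head, while the old version, like the recipes tree above, stays intact and is physically shared.

```java
public class PersistentList {
    // An immutable singly linked list: every "modification" returns a new head,
    // and the unchanged tail is shared between versions (like git's trees).
    record Node(String value, Node next) {
        Node prepend(String v) {
            return new Node(v, this); // the old version stays intact
        }
    }

    public static void main(String[] args) {
        Node v1 = new Node("menu.txt", new Node("recipes", null));
        Node v2 = v1.prepend("drinks.txt"); // a new version of the list

        // v1 is untouched, and v2 physically shares v1 as its tail:
        System.out.println(v2.next() == v1); // prints true
        System.out.println(v1.value());      // prints menu.txt
    }
}
```

Holding a reference to v2 is like holding HEAD: the latest version is reachable, and every older version is still there behind it.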

03 March 2019

Different forms of application code

I've used the LLVM compiler infrastructure to show the different stages application code can be in (the files below can be produced roughly with clang -S -emit-llvm test.c, llc test.ll, clang -c test.s, and clang test.o -o test):

1. Source code

A basic C source file:

$cat test.c
#include <stdio.h>

int main() {

    printf("Hello World\n");

    return 0;
}

2.  Intermediate code:

Some platforms, such as the JVM, the .NET CLR, and LLVM, have an intermediate representation that the compiler compiles the source code into. In the JVM it's called bytecode, while in LLVM it's called the Intermediate Representation (IR).

here's the LLVM IR of the above program:

$cat test.ll
; ModuleID = 'test.c'
source_filename = "test.c"
target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.14.0"

@.str = private unnamed_addr constant [13 x i8] c"Hello World\0A\00", align 1

; Function Attrs: noinline nounwind optnone ssp uwtable
define i32 @main() #0 {
  %1 = alloca i32, align 4
  store i32 0, i32* %1, align 4
  %2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([13 x i8], [13 x i8]* @.str, i32 0, i32 0))
  ret i32 0
}

declare i32 @printf(i8*, ...) #1

attributes #0 = { noinline nounwind optnone ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }

!llvm.module.flags = !{!0, !1}
!llvm.ident = !{!2}

!0 = !{i32 1, !"wchar_size", i32 4}
!1 = !{i32 7, !"PIC Level", i32 2}
!2 = !{!"Apple LLVM version 10.0.0 (clang-1000.10.44.4)"}

3. Assembly code:

The generated bytecode or intermediate format is usually either interpreted or compiled,
and the compilation is either JIT (just-in-time) or AOT (ahead-of-time).

In the case of JVM, it is both interpreted and JIT compiled.
In the case of LLVM, the IR is AOT compiled into assembly code.

Here's the output of the previous IR translated into assembly code (in the process of AOT compilation):

$cat test.s
.section __TEXT,__text,regular,pure_instructions
.build_version macos, 10, 14
.globl _main                   ## -- Begin function main
.p2align 4, 0x90
_main:                                  ## @main
## %bb.0:
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset %rbp, -16
movq %rsp, %rbp
.cfi_def_cfa_register %rbp
subq $16, %rsp
leaq L_.str(%rip), %rdi
movl $0, -4(%rbp)
movb $0, %al
callq _printf
xorl %ecx, %ecx
movl %eax, -8(%rbp)          ## 4-byte Spill
movl %ecx, %eax
addq $16, %rsp
popq %rbp
retq
                                        ## -- End function
.section __TEXT,__cstring,cstring_literals
L_.str:                                 ## @.str
.asciz "Hello World\n"


4. Object code:

Object code is the final machine (binary) code, but only for a specific module; in other words, it has not yet been linked with runtime libraries to form the complete binary.

The following is an example of the object code for the program above (it is binary, so catting it prints gibberish):

$cat test.o
R__compact_unwind__LD8 `�__eh_frame__TEXTX@�
  PUH��H��H�=�E���1ɉE��H��]�Hello World
-  �$��������*A�C

5. Binary Code:

After linking, the object code becomes a complete binary that can be executed.

$cat test
�__unwind_info__TEXT�H��__DATA__nl_symbol_ptr__DATA__la_symbol_ptr__DATH__LINKEDIT  �"�   0 0h � 8

1ɉE��H��]��%�L�qAS�%a�h�����Hello WorldUH��H��H�=;�E���
     �"Q@dyld_stub_binderQr�r@_printf�__mh_execute_header!main%��`$@ __mh_execute_header_main_printfdyld_stub_binder%

On the JVM side, there's GraalVM, which can produce native binaries, and there was gcj (part of GCC), which could generate native binaries as well.

02 March 2019

Spring security multiple authentication provider

In your WebSecurityConfigurerAdapter you will need to register more than one authentication provider:


@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {

    auth.authenticationProvider(new MyFirstAuthenticationProvider(userRepository, bCryptPasswordEncoder()));

    auth.authenticationProvider(new MySecondAuthenticationProvider(userRepository, bCryptPasswordEncoder()));
}


Then create MyFirstAuthenticationProvider and MySecondAuthenticationProvider like:

public class MyFirstAuthenticationProvider extends DaoAuthenticationProvider {

    public MyFirstAuthenticationProvider(UserRepository userRepository,
                                         BCryptPasswordEncoder bCryptPasswordEncoder) {
        // hypothetical wiring: assumes userRepository.findByUsername returns a UserDetails
        setUserDetailsService(username -> userRepository.findByUsername(username));
        setPasswordEncoder(bCryptPasswordEncoder);
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return MyFirstAuthenticationToken.class.isAssignableFrom(authentication);
    }
}

And its token:

public class MyFirstAuthenticationToken extends UsernamePasswordAuthenticationToken {

    public MyFirstAuthenticationToken(UserEntity principal, Object credentials,
                                      Collection<? extends GrantedAuthority> authorities) {
        super(principal, credentials, authorities);
    }
}
And the same for MySecondAuthenticationProvider.

You will need to use these authentication providers/tokens in the authentication/authorization filters.

24 February 2019

Where to save user authentication token: cookies vs local storage

Here's a very good comparison:

The important point in the link above is that the local storage approach is protected against CSRF but exposed to XSS.

see also:

01 January 2019

OAuth 2.0 and OpenID Connect (OIDC) flows

4 flows for OAuth 2.0:
  • Code flow:
    • the auth server sends an authorization code, then the client exchanges the code for the access token
    • fit for web apps (the code-for-token exchange happens over a secure back channel)
    • has refresh token
  • Implicit flow:
    • the auth server sends the access token directly
    • fit where the client is a native app (mobile, desktop, etc.)
    • doesn't have a refresh token, though some people work around that.
  • Resource owner credential flow:
    • the client app gets a token by sending the resource owner's username/password to the auth server
    • fit for enterprise-trusted apps (on-prem services, e.g. an on-prem JIRA server where the user enters his LDAP credentials), or regular client apps that connect to their corresponding backend service (e.g. a mobile front end connecting to its backend: the resource owner enters his own username/password into the client app to get an access token and a refresh token)
    • has refresh token.
  • Client credential:
    • service-to-service flow, no human interaction.
OpenID Connect (OIDC) builds on top of OAuth 2.0 by adding extra stuff like the id_token and the userInfo endpoint, and it reuses the first two flows (code & implicit, for server and native clients respectively).