Face recognition as second factor authenticator with Keycloak and AzureML

Keycloak is an open-source, enterprise-grade Identity and Access Management solution with extensive features and core integration with a variety of protocols such as OpenID Connect, SAML and OAuth 2.0. It also adds support for social logins as well as LDAP and Active Directory servers.

Keycloak can secure all of your applications through Single Sign-On: users log in through Keycloak instead of through each application individually. It provides role-based authorisation as well as fine-grained control over its many aspects, and is thus a very viable option for both enterprises and individuals with access management and authorisation needs.

Let's have a look at how to extend Keycloak's functionality to include face recognition as a second factor, so as to tackle the need for fingerprint biometrics in the age of social distancing. We will be using AzureML as our face recognition service.

AzureML provides a slew of easy-to-use APIs in its Azure Face service to detect, identify, verify and analyse faces in photos. We will be using the detect and verify APIs to confirm the identity of the user in the photo; more on this in validateFace() later.

Step 1: Set up

Getting a default instance of Keycloak running is extremely simple; on an Azure VM it is as easy as downloading the source and running a single command. You can download the source and look at the getting started guide here. If you are starting Keycloak in a container, make sure you have easy access to the source directory.

Henceforth, I will assume that your Keycloak instance is set up, that you can reach the login page, and that you have access to the source directory.

Before using the Face service from Azure, we need to create an Azure account, subscribe to the Face API, add the faces of the people we want to recognise to person groups, and then train the face list. Microsoft has easy step-by-step documentation for this, and their APIs are extremely simple with sample code provided for the API calls, so I will not be covering that here. You can visit subscribe for subscribing to the API, and quick-start and APIs for getting started and setting up face lists, person groups and training. After you have the Face API set up, let's dive in.
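As a rough sketch of the setup calls described above: the person group is created with a PUT and trained with a POST against the Face v1.0 REST endpoints. The endpoint, key and group id below are illustrative placeholders; this only builds the requests (using the JDK's `java.net.http` client) without sending them.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PersonGroupSetup {
    // Placeholders: substitute your own Face resource endpoint and subscription key.
    static final String ENDPOINT = "https://example.cognitiveservices.azure.com";
    static final String KEY = "<subscription-key>";

    // Builds the PUT request that creates a person group (not sent here).
    static HttpRequest createPersonGroup(String groupId, String name) {
        String body = "{\"name\": \"" + name + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT + "/face/v1.0/persongroups/" + groupId))
                .header("Content-Type", "application/json")
                .header("Ocp-Apim-Subscription-Key", KEY)
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    // Builds the POST request that starts training the group.
    static HttpRequest trainPersonGroup(String groupId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT + "/face/v1.0/persongroups/" + groupId + "/train"))
                .header("Ocp-Apim-Subscription-Key", KEY)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

Adding persons and faces to the group follows the same pattern; Microsoft's quick-start covers the full sequence.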

Custom Authentication using Service Provider Interfaces(SPI)

We will be creating a Java maven project using the SPI provided by Keycloak to extend the authentication functionality. We will be adding face recognition in addition to regular authentication.

SPIs provide us with a number of exposed methods which we can make use of in our implementation. We will have a look at these methods as we move forward.

An overview of steps to come:

  • Create a maven project with Keycloak SPI
  • Implement the Authenticator interface of Keycloak
    • configuredFor()
    • setRequiredActions()
    • authenticate()
      • addRequiredActions
    • action()
    • validateFace()
  • AuthenticatorFactory
  • Deploy the authenticator
  • Enable in admin console
  • Done!

Step 2: Create a maven project with Keycloak SPI

We first need to create a maven project with the required dependencies to use Keycloak's provided SPI. The required dependencies for a custom authenticator are

  1. keycloak-core
  2. keycloak-server-spi
  3. keycloak-server-spi-private
  4. keycloak-services

The scope for all of them is provided. An example pom from the official Keycloak GitHub is available here.

We also need httpclient from org.apache.httpcomponents for our http requests.
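Putting these together, the dependency section of the pom might look like the fragment below. The version numbers are illustrative placeholders; match the Keycloak artifacts to the version of your Keycloak distribution.

```xml
<dependencies>
    <!-- Keycloak SPI dependencies; provided by the Keycloak runtime -->
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-core</artifactId>
        <version>${keycloak.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-server-spi</artifactId>
        <version>${keycloak.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-server-spi-private</artifactId>
        <version>${keycloak.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-services</artifactId>
        <version>${keycloak.version}</version>
        <scope>provided</scope>
    </dependency>
    <!-- For the HTTP calls to the Azure Face API -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.13</version>
    </dependency>
</dependencies>
```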

Run mvn install to download the dependencies. Now that the project is ready, let's move to the next step.

Step 3: Implementing the Authenticator

The extended Authenticator class is where the crux of our authentication logic will reside.

First, let's implement this class and give it a name. Let's call it FaceAuthenticator.

public class FaceAuthenticator implements Authenticator

Next, let's implement the methods from the Authenticator interface. Let's see the methods that make up this interface.

configuredFor()

This method checks whether the user is configured for this authenticator, typically backed by a CredentialProvider. We shall not be using one, so we simply return true. A CredentialProvider is useful to implement when you need to store and validate user credentials; an example can be found here.

@Override
public boolean configuredFor(KeycloakSession session, RealmModel realm, UserModel user) {
	logger.debug("configuredFor called ... session=" + session + ", realm=" + realm + ", user=" + user);
//  return session.userCredentialManager().isConfiguredFor(realm, user, "secret_question");
	return true;
}

setRequiredActions()

This method is called if configuredFor() returns false while this authenticator is required in the current flow, and only if isUserSetupAllowed() in the AuthenticatorFactory (which we will look at later) also returns true. As we are not using configuredFor(), we do not need this either.

@Override
public void setRequiredActions(KeycloakSession session, RealmModel realm, UserModel user) {
	// user.addRequiredAction("SECRET_QUESTION_REQUIRED_ACTION_PROVIDER_ID");
}

authenticate()

This is the first method called when the authentication execution is reached. It is used to check whether the user has already verified themselves (for one-time verification or trusted machines) or whether the check should run on every login attempt (as with OTP logins). This method decides whether the authentication should run or be skipped so the flow can continue, and it also renders the initial .ftl login form.

We will render our face validation .ftl file here. If need be, we can also skip face authentication at this point; for an example of skipping the authentication when a cookie is set, have a look here.

@Override
	public void authenticate(AuthenticationFlowContext context) {
		logger.debug("authenticate called ... context = " + context);

		Response challenge = context.form().createForm("face-validation.ftl");
		context.challenge(challenge);
	}

face-validation.ftl

Keycloak uses Freemarker templates to generate its HTML. We can override these .ftl files to modify the Keycloak UI. We will be creating a new file for face recognition; let's name it face-validation.ftl.

We will show a video feed from the device camera and provide a button to take a photo. On clicking the submit button, this photo will be sent to the back end for face recognition and authentication.

We will be importing and using Keycloak’s default template layout along with its classes. Let’s use mediaDevices for the video stream. The entire ftl file is below.

<#import "template.ftl" as layout>
<@layout.registrationLayout; section>
    <#if section = "title">
        ${msg("loginTitle",realm.name)}
    <#elseif section = "form">
        <form id="kc-totp-login-form" class="${properties.kcFormClass!}" action="${url.loginAction}" method="post">
            <div class="${properties.kcFormGroupClass!}">
                  <div>

                    <div class="${properties.kcLabelWrapperClass!}">
                        <label for="totp" class="${properties.kcLabelClass!}">Capture your photo and submit</label>
                    </div>
                    <br>
                    <br>
                    
                    <video style="width:100%" autoplay></video>
                    <canvas  class="${properties.kcInputClass!}" style="display:none;"></canvas>
                    <br>
                    <br>
                    <input id="imageCanvas" name="imageCanvas" type="text" hidden/>  
                    <div style="width:100%; text-align: center;">            
                        <span class="btn" style="margin:20px" id="takeScreenShot">Capture</span>
                        <img style="width:25%"></img>
                    </div>
                    <script>
                            const constraints = {
                                video: {
                                    width: {min:1, ideal: 360},
                                    height: {min:1, ideal: 240}
                                }
                            };
                            const video = document.querySelector('video');

                            navigator.mediaDevices.getUserMedia(constraints)
                            .then((stream) => {video.srcObject = stream});
                            
                            const buttonTakeScreenShot = document.querySelector('#takeScreenShot');
                            const canvas = document.querySelector('canvas');
                            const img = document.querySelector('img');
                            const imgText = document.querySelector('#imageCanvas');

                            buttonTakeScreenShot.onclick = video.onclick =  function(){ 
                                canvas.width = video.videoWidth;
                                canvas.height = video.videoHeight;
                                canvas.getContext('2d').drawImage(video, 0, 0);
                                img.src = canvas.toDataURL('image/jpeg');
                                imgText.value = img.src;
                            }
                            

                    </script>
				  </div>

            </div>

            <div class="${properties.kcFormGroupClass!}">
                <div id="kc-form-options" class="${properties.kcFormOptionsClass!}">
                    <div class="${properties.kcFormOptionsWrapperClass!}">
                    </div>
                </div>


                <div id="kc-form-buttons" class="${properties.kcFormButtonsClass!}">
                    <div class="${properties.kcFormButtonsWrapperClass!}">
                        <input class="${properties.kcButtonClass!} ${properties.kcButtonPrimaryClass!} ${properties.kcButtonLargeClass!}" name="login" id="kc-login" type="submit" value="${msg("doSubmit")}"/>
                 </div>
            </div>
            </div>
        </form>
        <#if client?? && client.baseUrl?has_content>
            <p><a id="backToApplication" href="${client.baseUrl}">${msg("backToApplication")}</a></p>
        </#if>
    </#if>
</@layout.registrationLayout>

action()

Invoked after the verification form is rendered and submitted. Here the form data (for us, the captured image of the user) is validated, and the authentication succeeds or fails based on that validation.

Here we will call a method validateFace(), which calls the face recognition APIs and returns whether the face in the image belongs to the current user. If you use the identify API instead of verify, you could return the recognised person's name and match it against the current user's name (or, better, the user id). The action() function is below.

@Override
	public void action(AuthenticationFlowContext context) {
		logger.debug("action called ... context = " + context);

		if (validateFace(context)) {
			context.success();
		} else {
			Response challenge = context.form()
					.setError("Face not recognised, please try again")
					.createForm("face-validation.ftl");
			context.failureChallenge(AuthenticationFlowError.INVALID_CREDENTIALS, challenge);
		}
	}

face_recognition_name is the identifier/name we could store as a user attribute in the admin UI and match against the name returned by the identify API. Since we use the verify API below, we instead store personId and personGroupId as user attributes.

validateFace() using AzureML Face API

The validateFace() function first calls the detect Face API from Azure, which returns a faceId; we then call the verify API with that faceId and the user's personId to verify whether they are the same person.
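The verify call's request body is plain JSON with three fields. A tiny hypothetical helper (the field names follow the Face v1.0 verify API; the helper itself is not part of the authenticator below) makes the shape explicit:

```java
public class VerifyBody {
    // Builds the JSON body for the Face verify call: faceId from detect,
    // personId and personGroupId from the user's Keycloak attributes.
    static String verifyBody(String faceId, String personId, String personGroupId) {
        return String.format(
            "{\"faceId\": \"%s\", \"personId\": \"%s\", \"personGroupId\": \"%s\"}",
            faceId, personId, personGroupId);
    }
}
```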

We get the image from the context, send it as a byte array to the detect API, then call the verify API using the personId and personGroupId stored in the user's attributes along with the faceId returned by detect, and return the result to action(). The code for validateFace() is given below.

protected Boolean validateFace(AuthenticationFlowContext context) {
		MultivaluedMap<String, String> formData = context.getHttpRequest().getDecodedFormParameters();
		String imageData = formData.getFirst("imageCanvas");
		// Strip the "data:image/...;base64," prefix that canvas.toDataURL() adds
		imageData = imageData.split(",")[1];
		byte[] decodedImage = Base64.getDecoder().decode(imageData);
		try (CloseableHttpClient client = HttpClientBuilder.create().build()) {
			
			 final String subscriptionKey = "<Subscription Key>";

			 final String uriBase =
			        "https://<My Endpoint String>.com/face/v1.0/detect";
			
            URIBuilder builder = new URIBuilder(uriBase);
            builder.setParameter("returnFaceId", "true");
            builder.setParameter("returnFaceLandmarks", "false");
            builder.setParameter("returnFaceAttributes", "");
            URI uri = builder.build();
            HttpPost request = new HttpPost(uri);
            
            request.setHeader("Content-Type", "application/octet-stream");
            request.setHeader("Ocp-Apim-Subscription-Key", subscriptionKey);
            
            ByteArrayEntity reqEntity = new ByteArrayEntity(decodedImage);
            request.setEntity(reqEntity);

            HttpResponse response = client.execute(request);
            HttpEntity entity = response.getEntity();
		
            String faceId = "";
            
			if (entity != null)
            {
                // Format and display the JSON response.
                System.out.println("REST Response:\n");

                String jsonString = EntityUtils.toString(entity).trim();
                if (jsonString.charAt(0) == '[') {
                    JSONArray jsonArray = new JSONArray(jsonString);
                    System.out.println(jsonArray.toString(2));
                    if (jsonArray.length() > 1) {
                    	logger.warn("Multiple faces in the image");
                    	return false;
                    }
                    faceId = jsonArray.getJSONObject(0).getString("faceId");
                }
                else if (jsonString.charAt(0) == '{') {
                    JSONObject jsonObject = new JSONObject(jsonString);
                    System.out.println(jsonObject.toString(2));
                    faceId = jsonObject.getString("faceId");
                } else {
                    System.out.println(jsonString);
                }
            }
			
			
			URIBuilder builderVerify = new URIBuilder("https://westus.api.cognitive.microsoft.com/face/v1.0/verify");


            URI uriVerify = builderVerify.build();
            HttpPost requestVerify = new HttpPost(uriVerify);
            requestVerify.setHeader("Content-Type", "application/json");
            requestVerify.setHeader("Ocp-Apim-Subscription-Key", subscriptionKey);

    		String personId = context.getUser().getAttribute("personId").get(0);
    		String personGroupId = context.getUser().getAttribute("personGroupId").get(0);


            // Request body
            StringEntity reqEntityVerify = new StringEntity("{\n" +
            		"    \"faceId\": \"" + faceId + "\",\n" +
            		"    \"personId\": \"" + personId + "\",\n" +
            		"    \"personGroupId\": \"" + personGroupId + "\"\n" +
        			"}");
            requestVerify.setEntity(reqEntityVerify);

            HttpResponse responseVerify = client.execute(requestVerify);
            HttpEntity entityVerify = responseVerify.getEntity();

            if (entityVerify != null) 
            {
                String jsonStringVerify = EntityUtils.toString(entityVerify).trim();
                JSONObject jsonObjectVerify = new JSONObject(jsonStringVerify);
                return jsonObjectVerify.getBoolean("isIdentical");
            }

		} catch (IOException | URISyntaxException e) {
			logger.error("Face validation failed", e);
			return false;
		}
		return false;

	}
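The data-URL handling at the top of validateFace() can be exercised in isolation. A minimal self-contained sketch of just that step (class and method names here are illustrative, not part of the authenticator):

```java
import java.util.Base64;

public class DataUrlDecode {
    // Strips the "data:image/...;base64," prefix the canvas adds
    // and decodes the remaining base64 payload into raw image bytes.
    static byte[] decodeDataUrl(String dataUrl) {
        String payload = dataUrl.split(",")[1];
        return Base64.getDecoder().decode(payload);
    }
}
```

If the submitted value ever arrives without the prefix, the `split(",")[1]` lookup will throw, so production code should validate the form field before decoding.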

Step 4: Implementing the AuthenticatorFactory

Let's now create a class that implements AuthenticatorFactory and ConfigurableAuthenticatorFactory. The factory creates the authenticator and adds options to it that will be available in the admin UI, such as requirement levels and any configuration properties that should be available to the authenticator, like the face recognition name.

Here we define the requirement levels made available to the admin for this authenticator; a DISABLED choice, for example, means this authentication method can be disabled from the UI. We can also create configuration properties for the authenticator (e.g. use it only for one-time verification, or on every login). This is also where you could add a face recognition name or id property to compare against the value returned by the AzureML API.

This will return a singleton Authenticator. The code is below:

public class KeycloakFaceAuthenticatorFactory implements AuthenticatorFactory, ConfigurableAuthenticatorFactory {

    public static final String PROVIDER_ID = "face-authentication";

    private static Logger logger = Logger.getLogger(KeycloakFaceAuthenticatorFactory.class);
    private static final KeycloakFaceAuthenticator SINGLETON = new KeycloakFaceAuthenticator();


    public static final AuthenticationExecutionModel.Requirement[] REQUIREMENT_CHOICES = {
            AuthenticationExecutionModel.Requirement.REQUIRED,
            AuthenticationExecutionModel.Requirement.OPTIONAL,
            AuthenticationExecutionModel.Requirement.DISABLED};



    public String getId() {
        logger.debug("getId called ... returning " + PROVIDER_ID);
        return PROVIDER_ID;
    }

    public Authenticator create(KeycloakSession session) {
        logger.debug("create called ... returning " + SINGLETON);
        return SINGLETON;
    }


    public AuthenticationExecutionModel.Requirement[] getRequirementChoices() {
        logger.debug("getRequirementChoices called ... returning " + REQUIREMENT_CHOICES);
        return REQUIREMENT_CHOICES;
    }

    public boolean isUserSetupAllowed() {
        logger.debug("isUserSetupAllowed called ... returning true");
        return true;
    }

    public boolean isConfigurable() {
        logger.debug("isConfigurable called ... returning true");
        return true;
    }

    public String getHelpText() {
        logger.debug("getHelpText called ...");
        return "Recognises Faces";
    }

    public String getDisplayType() {
        String result = "Face Authentication";
        logger.debug("getDisplayType called ... returning " + result);
        return result;
    }

   

    public List<ProviderConfigProperty> getConfigProperties() {
        
        return new ArrayList<ProviderConfigProperty>();
    }

    public void init(Config.Scope config) {
        logger.debug("init called ... config.scope = " + config);
    }

    public void postInit(KeycloakSessionFactory factory) {
        logger.debug("postInit called ... factory = " + factory);
    }

    public void close() {
        logger.debug("close called ...");
    }

	@Override
	public String getReferenceCategory() {
		return "face-id";
	}
}

Step 5: Deploy the authenticator

Before we can go to the UI and enable the authenticator we need to add it to the Keycloak directory and make sure that Keycloak can identify and pick the authenticator. Here are the steps for that:

  • Register the factory with the ServiceLoader mechanism: create a file named org.keycloak.authentication.AuthenticatorFactory under /keycloak-face-recognition-root-folder/src/main/resources/META-INF/services/ and paste the fully qualified name of the factory we wrote into it
  • Build the package by running mvn package
  • Move package jar to keycloak_home/providers/
  • Move the templates (.ftl files) to keycloak_home/themes/base/login
  • Done!

Step 6: Enable the authenticator in Keycloak UI

We need to enable the authenticator in the UI. Let's follow these steps to get that done:

  • Go to Admin UI for Keycloak
  • Go to Authentication > Flows
  • Duplicate the browser flow
  • Add this custom authenticator
  • Set the face authentication as required
  • Authentication > Bindings
  • Bind the new flow where required
  • Set the personId and personGroupId for the users under Users > Attributes
  • That's it!

The login page for face authentication.

Done!

We have now built a fully functional, if improvable, second-factor authentication flow using Keycloak and AzureML.
