Thursday, September 27, 2012

Spring Security using API Authentication


While there are many blog posts that detail how to use Spring Security, I often still find it challenging to configure when a problem domain lies outside of the standard LDAP or database authentication. In this post, I'll describe some simple customizations to Spring Security that enable it to be used with a REST-based API call. Specifically, the use case is where you have an API service that will return a user object that includes a SHA-256 password hash.


The prerequisites for running this sample are Git and Maven, plus your choice of IDE (tested with both Eclipse and IntelliJ).

The source code can be found in the accompanying repository. After pulling down the code, perform the following steps:
  1. In a terminal window, cd to the Shared directory located under the root where the source code resides.
  2. Issue the command mvn clean install. This will build the Shared sub-project and install the jar into your local mvn repository.
  3. Within Eclipse or IntelliJ, import the project as a Maven project. In Eclipse, this will result in 3 projects being created: Shared, SpringWebApp, and RestfulAPI. In IntelliJ, this will be represented as sub-projects. No errors should exist after the compilation process is complete.
  4. Change directory to RestfulAPI. Then, issue the command mvn jetty:run to run the API webapp.  You can then issue the following URL that will bring back a User object represented in JSON: http://localhost:9090/RestfulAPI/api/v1/user/john
  5. Open a new terminal window and cd to the SpringWebApp directory located under the project root. Issue the command mvn jetty:run. This will launch a standard Spring webapp that incorporates Spring Security. You can access the single HTML page at: http://localhost:8080/SpringWebApp/. After clicking the Login link, log in with the username john and the password doe. You should be redirected to a Hello Admin page.

In order to demonstrate the solution, three maven modules are used, which are illustrated below:

  • SpringWebApp. This is a typical Spring webapp that serves up a single JSP page. The contents of the page will vary depending upon whether the user is currently logged in or not. When first visiting the page, a Login link will appear, which directs the user to the built-in Spring Security login form. When they attempt to log in, a RESTEasy client is used to place a call to the API service (described below), which returns a JSON string that is converted into a Java object via the RESTEasy client. The details of how Spring Security is configured are discussed in the following sections.
  • RestfulAPI. An API service that serves JSON requests. It is configured using RESTEasy (a JAX-RS implementation), and is described in more detail in the next section.
  • Shared. This contains a few Java classes that are shared between the other two projects. Specifically, the User object DTO, and the RESTEasy proxy definition (it's shared because it can also be used by the RESTEasy client).

RestfulAPI Dissection

The API webapp is configured using RESTEasy's Spring implementation. The RESTEasy documentation is very thorough, so I won't go into a detailed explanation of its setup. A single API call is defined (in UserProxy in the Shared project) that returns a static JSON string. The API's proxy (or interface) is defined as follows:
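The original snippet didn't survive, but based on the URL path and media type described below, the UserProxy interface would look roughly like this (package name and method name are assumptions):

```java
package com.acme.shared;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/api/v1")
public interface UserProxy {

    // Responds to GET /api/v1/user/{username} with a JSON-serialized User
    @GET
    @Path("/user/{username}")
    @Produces(MediaType.APPLICATION_JSON)
    User getUser(@PathParam("username") String username);
}
```

Because the interface lives in the Shared module, the same annotations drive both the server-side dispatch and the RESTEasy client proxy.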

For those of you familiar with JAX-RS, you'll easily follow this configuration. It defines an API URI that will respond to requests sent to the URL path of /api/v1/user/{username} where {username} is replaced with an actual username value. The implementation of this service, which simply returns a static response, is shown below:
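The hashing is the only moving part worth sketching. A minimal, self-contained illustration of producing the SHA-256 hex hash with the JDK's MessageDigest (class and field names here are assumptions, not the post's actual code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class UserServiceImpl {

    // Hex-encode the SHA-256 digest of a cleartext password
    public static String sha256Hex(String cleartext) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(cleartext.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        // The static response for /api/v1/user/john carries the hash of "doe"
        System.out.println("{\"username\":\"john\",\"password\":\"" + sha256Hex("doe") + "\"}");
    }
}
```
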

About the only thing remotely complicated is the use of the SHA-256 hashing of the user's password. We'll see shortly how this gets interpreted by Spring Security. When the URL is accessed, the following JSON string is returned:

The webapp's web.xml contains the setup configuration to service RESTEasy requests, so if you're curious, take a look at that.

SpringWebApp Dissection

Now we can look at the Spring Security configuration. The web.xml file for the project configures it as a Spring application, and specifies the file applicationContext-security.xml as the initial Spring configuration file.  Let's take a closer look at this file, as this is where most of the magic occurs:
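A sketch of what this configuration might look like, with schema declarations elided and bean, class, and property-file names assumed (the walkthrough below refers to the line positions of the original snippet):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:security="http://www.springframework.org/schema/security"
       ...>

    <!-- scan com.acme for Spring-backed classes, enable annotations -->
    <context:component-scan base-package="com.acme" />
    <context:annotation-config />

    <!-- load the properties file holding the API host -->
    <context:property-placeholder location="classpath:api.properties" />

    <!-- enable Spring Security (no per-page role restrictions in this example) -->
    <security:http auto-config="true" />

    <!-- custom authentication provider backed by our UserDetailsService bean -->
    <security:authentication-manager>
        <security:authentication-provider user-service-ref="userDetailsSrv">
            <security:password-encoder ref="passwordEncoder" />
        </security:authentication-provider>
    </security:authentication-manager>

    <!-- the custom UserDetailsService implementation -->
    <bean id="userDetailsSrv" class="com.acme.security.UserDetailsServiceImpl" />

    <!-- Spring's SHA password encoder, configured for SHA-256 -->
    <bean id="passwordEncoder"
          class="org.springframework.security.authentication.encoding.ShaPasswordEncoder">
        <constructor-arg value="256" />
    </bean>
</beans>
```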

Let's go through the line numbers to describe their functionality. Lines 3 through 5 instruct Spring to look for Spring-backed classes in the com.acme package and enable Spring annotation support. Line 7 is used to load the properties file (this is used to specify the API host). Lines 9 through 11 enable Spring Security for the application. Normally, as a child element of http, you would specify which pages should be protected using roles, but to keep this example simple, that wasn't configured.

Lines 13-17 are where the customizations to stock Spring Security begin. We define a custom authentication-provider that references the userDetailsSrv bean. That bean is implemented by a custom class (line 19). Let's take a closer look at this class:

As you can see, this class implements the Spring interface UserDetailsService, which requires overriding the method loadUserByUsername. This method is responsible for retrieving the user from the authentication provider/source. The returned user must contain a password property in order for Spring Security to compare against what was provided in the form (if no matching user is found, a UsernameNotFoundException is thrown - line 28). In this case, as we've seen previously, the password is returned as a SHA-256 hash.
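A sketch of what that class might look like (the APIHelper method name and annotations are assumptions):

```java
@Service("userDetailsSrv")
public class UserDetailsServiceImpl
        implements org.springframework.security.core.userdetails.UserDetailsService {

    @Autowired
    private APIHelper apiHelper;

    @Override
    public org.springframework.security.core.userdetails.UserDetails
            loadUserByUsername(String username) {
        // Fetch the user from the REST API rather than LDAP or a database
        User apiUser = apiHelper.fetchUser(username);
        if (apiUser == null) {
            throw new UsernameNotFoundException("No user found: " + username);
        }
        // Wrap the DTO; getPassword() will return the SHA-256 hash from the API
        return new UserDetails(apiUser);
    }
}
```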

In our API implementation, the user is looked up using the APIHelper class, which we'll cover next. The returned API data is then populated into a custom class called UserDetails, which implements the Spring interface of the same name. That interface requires concrete implementations of the getUsername() and getPassword() methods. Spring will invoke those in the next step of security processing to compare their values against what was entered in the web form.
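The custom UserDetails wrapper could be sketched as follows (field and getter names on the User DTO are assumptions):

```java
public class UserDetails
        implements org.springframework.security.core.userdetails.UserDetails {

    private final User apiUser;   // the DTO returned from the API

    public UserDetails(User apiUser) {
        this.apiUser = apiUser;
    }

    @Override
    public String getUsername() {
        return apiUser.getUsername();
    }

    @Override
    public String getPassword() {
        // The SHA-256 hash returned by the API; Spring compares this against
        // the encoder's hash of the password submitted through the form
        return apiUser.getPassword();
    }

    // remaining UserDetails methods (account flags, authorities) omitted for brevity
}
```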

How does Spring go about comparing the SHA-256 hash returned by the API against the form password value? If you look back at the XML configuration, it contained this setting:
Notice the passwordEncoder -- this reference points to the Spring class ShaPasswordEncoder. This class computes an SHA-256 hash of the password provided through the web form, and Spring then compares that computed value against what we returned via the API.

Let's close this out by looking at the APIHelper class:

The first thing you'll notice on lines 8 and 9 is the injection of the property value; as you recall, this was set in the properties file. This identifies the host to which the API call is posted (since it's running locally, localhost is specified). Lines 17 through 20 use one of the RESTEasy client mechanisms to post a JSON RESTful call (RESTEasy also has what is called a client proxy implementation, which is easier to use/less code, but doesn't provide as much low-level control). The resulting response from the API is then converted from JSON into the User Java object by way of Jackson in line 26. That Java object is then returned to the UserDetails service.
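The essentials of APIHelper can be sketched like this, using RESTEasy's low-level ClientRequest API together with Jackson (the property key, port, and method name are assumptions):

```java
@Component
public class APIHelper {

    @Value("${api.host}")          // injected from the properties file
    private String apiHost;

    public User fetchUser(String username) throws Exception {
        // Low-level RESTEasy client call (as opposed to the proxy approach)
        ClientRequest request = new ClientRequest(
                "http://" + apiHost + ":9090/RestfulAPI/api/v1/user/" + username);
        request.accept("application/json");

        ClientResponse<String> response = request.get(String.class);
        if (response.getStatus() != 200) {
            throw new RuntimeException("API call failed: " + response.getStatus());
        }

        // Convert the JSON payload into the shared User DTO via Jackson
        return new ObjectMapper().readValue(response.getEntity(), User.class);
    }
}
```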


As you can see, the actual work involved in customizing Spring Security to authenticate against an API call (or really any external service) is rather straightforward. Only a few classes have to be implemented, but it can be tricky trying to figure this out for the first time -- hence the complete end-to-end example.

Thursday, August 2, 2012

GWT 2.5 SuperDevMode and Google App Engine - Now an Amazing Combination

Prior to version 2.5 of GWT, I found it rather tedious to do development with GWT and Google App Engine. Both had separate development modes that didn't always play nice together, and often I found the code-test cycle to be very time-consuming (launching GAE's local development environment is pretty slow).

However, with the advent of GWT's new SuperDevMode (SDM), you can easily modify your Java client code and see it quickly reflected in GAE's development environment without any restart necessary. This is in addition to the other native Java debugging features introduced with SDM. Below is a quick screencast of this capability:

For those of you without access to YouTube, you can download it directly here.

Look forward to your comments/suggestions!

(see my previous blog on setting up SDM in Eclipse).

Thursday, July 26, 2012

Setting-up GWT 2.5's SuperDevMode

Setting up SuperDevMode in GWT 2.5

One of the most exciting features of GWT 2.5 is what is called SuperDevMode. As a replacement for the previous DevMode, it allows you to debug your Java code (and JavaScript) directly through Chrome's Developer Tools. Perhaps because it is still an experimental feature, instructions are a bit vague on how to set it up. Since it took me a while to figure out, I thought I would spare you the pain by sharing the screencast below (it's a little rough, but should be sufficient to get you started).

Note: For those of you unfortunate enough to work for a company that blocks YouTube (like you can't access it via your smartphone or tablet, duh) -- you can download from this link.

In my next blog, I discuss how SDM can be used in powerful combination with Google's App Engine development environment.

Wednesday, May 30, 2012

Authenticating for Google Services, Part 2


In the last blog entry I described how to use OAuth for access/authentication for Google's API services. Unfortunately, as I discovered a bit later, the approach I used was OAuth 1.0, which has now been officially deprecated by Google in favor of OAuth 2.0. Obviously, I was a bit bummed to discover this, and promised I would create a new blog entry with instructions on how to use 2.0. The good news is that, with the 2.0 support, Google has added some additional helper classes that make things easier, especially if you are using Google App Engine, which is what I'm using for this tutorial.

The Google Developers site now has a pretty good description on how to setup OAuth 2.0. However, it still turned out to be a challenge to configure a real-life example of how it's done, so I figured I'd document what I've learned.

Tutorial Scenario

In the last tutorial, the project I created illustrated how to access a listing of a user's Google Docs files. In this tutorial, I changed things up a bit, and instead use YouTube's API to display a list of a user's favorite videos. Accessing a user's favorites does require authentication with OAuth, so this was a good test.

Getting Started

(Eclipse project for this tutorial can be found here).

The first thing you must do is follow the steps outlined in Google's official docs on using OAuth 2.0. Since I'm creating a web-app, you'll want to follow the section in those docs titled "Web Server Applications". In addition, the steps I talked about previously for setting up a Google App Engine are still relevant, so I'm going to jump right into the code and bypass these setup steps.

(NOTE: The Eclipse project can be found here -- I again elected not to use Maven in order to keep things simple for those who don't have it installed or aren't knowledgeable in Maven).

The application flow is very simple (assuming a first-time user):
  1. When the user accesses the webapp (assuming you are running it locally at http://localhost:8888 using the GAE developer emulator), they must first login to Google using their gmail or Google domain account.
  2. Once logged in, the user is redirected to a simple JSP page that has a link to their YouTube favorite videos.
  3. When they click on the link, a servlet will initiate the OAuth process to acquire access to their YouTube account. The first part of this process is being redirected to a Google page that prompts them whether they want to grant the application access.
  4. Assuming the user responds affirmatively, a list of 10 favorites will be displayed with links.
  5. If they click on the link, the video will load.
Here's a depiction of the flow across the first 3 pages:

And here's the last two pages (assuming that the user clicks on a given link):

While this example is specific to YouTube, the same general principles apply to accessing any of the Google-based cloud services, such as Google+, Google Drive, Docs, etc. The key enabler for creating such integrations is obviously OAuth, so let's look at how that process works.

OAuth 2.0 Process Flow

Using OAuth can be a bit overwhelming for the new developer first learning the technology. The main premise behind it is to allow users to selectively identify which "private" resources they want to make accessible to an external application, such as we are developing for this tutorial. By using OAuth, the user can avoid having to share their login credentials with a 3rd party, but instead can simply grant that 3rd party access to some of their information.

To achieve this capability, the user is navigated to the source where their private data resides, in this case, YouTube. They can then either allow or reject the access request. If they allow it, the source of the private data (YouTube) then returns a single-use authorization code to the 3rd party application. Since it's rather tedious for the user to have to grant access every time access is desired, there is an additional call that can be made to "trade in" the single-use authorization code for a longer-term one. The overall flow for the web application we're developing in this tutorial can be seen below.

OAuth Flow
The first step that takes place is to determine whether the user is already logged into Google using either their gmail or Google Domain account. While not directly tied to the OAuth process, it's very convenient to enable users to login with their Google account as opposed to requiring them to sign up with your web site. That's the first callout that is made to Google. Then, once logged in, the application determines whether the user has a local account setup with OAuth permissions granted. If they are logging in for the first time, they won't.  In that case, the OAuth process is initiated.

The first step of that process is to specify to the OAuth provider, in this case Google's YouTube service, what "scope" of access is being requested. Since Google has a lot of services, they have a lot of scopes. You can determine them most easily using their OAuth 2.0 sandbox.

When you kick off the OAuth process, you provide the scope(s) you want access to, along with the OAuth client credentials that Google has provided you (these steps are actually rather generic to any provider that supports OAuth). For our purposes, we're seeking access to the user's YouTube account, so the scope provided by Google is:

If the end-user grants access to the resource identified by the scope, Google will then post back an authorization code to the application. This is captured in a servlet. Since the returned code is only a "single-use" code, it is exchanged for a longer-running access token (and related refresh token). That step is represented above by the activity/box titled "Access & Refresh Token Requested".

Once armed with the access token, the application can then access the users' private data by placing an API call along with the token. If everything checks out, the API will return the results.

It's not a terribly complicated process -- it just involves a few steps. Let's look at some of the specific implementation details, beginning with the servlet filter that determines whether the user has already logged into Google and/or granted OAuth access.


Let's take a look at the first few lines of the AuthorizationFilter (to see how it's configured as a filter, see the web.xml file).

The first few lines simply cast the generic servlet request and response to their corresponding Http equivalents -- this is necessary since we want access to the HTTP session. The next step is to determine whether a CredentialStore is present in the servlet context. As we'll see, this is used to store the user's credentials, so it's convenient to have it readily available in subsequent servlets. The guts of the matter begin when we check to see whether the user is already present in the session using:
if (session.getAttribute(Constant.AUTH_USER_ID) == null) {
If not,  we get their Google login credentials using Google's UserService class. This is a helper class available to GAE users to fetch the user's Google userid, email and nickname. Once we get this info from UserService, we store some of the user's details in the session.
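The opening of the filter might look like this (constant names and the context attribute key are assumptions):

```java
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {

    // Cast to the HTTP equivalents so we can get at the session
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;
    HttpSession session = request.getSession();

    // Shared credential store kept in the servlet context
    CredentialStore credentialStore =
            (CredentialStore) request.getServletContext().getAttribute("credentialStore");

    if (session.getAttribute(Constant.AUTH_USER_ID) == null) {
        // Use GAE's UserService to identify the logged-in Google user
        UserService userService = UserServiceFactory.getUserService();
        com.google.appengine.api.users.User user = userService.getCurrentUser();
        session.setAttribute(Constant.AUTH_USER_ID, user.getUserId());
        session.setAttribute(Constant.AUTH_USER_EMAIL, user.getEmail());
    }
    // the OAuth credential check comes next
}
```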

At this point, we haven't done anything with OAuth, but that will change in the next series of code lines:
try {
    Utils.getActiveCredential(request, credentialStore);
} catch (NoRefreshTokenException e1) {
    // if this catch block is entered, we need to perform the oauth process
    log.info("No user found - authorization URL is: " + e1.getAuthorizationUrl());
    response.sendRedirect(e1.getAuthorizationUrl());
}


A helper class called Utils is used for most of the OAuth processing. In this case, we're calling the static method getActiveCredential(). As we will see in a moment, this method will throw a NoRefreshTokenException if no OAuth credentials have been previously captured for the user. As a custom exception, it carries the URL value that is used for redirecting the user to Google to seek OAuth approval.

Let's take a look at the getActiveCredential() method in more detail, as that's where much of the OAuth handling is managed.

The first thing we do is fetch the Google userId from the session (they can't get this far without it being populated).  Next, we attempt to get the user's OAuth credentials (stored in the Google class with the same name) from the CredentialStore using the Utils static method getStoredCredential(). If no credentials are found for that user, the Utils method called getAuthorizationUrl() is invoked. This method, which is shown below, is used to construct the URL that the browser is redirected to which is used to prompt the user to authorize access to their private data (the URL is served up by Google, since it will ask the user for approval).

As you can see, this method uses the Google class GoogleAuthorizationCodeRequestUrl. It constructs an HTTP call using the OAuth client credentials that are provided by Google when you sign up to use OAuth (those credentials, coincidentally, are stored in a file called client_secrets.json). Other parameters include the scope of the OAuth request and the URL that the user will be redirected back to if approval is granted. That URL is the one you specified when signing up for Google's OAuth access:
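A sketch of that URL construction, assuming a getClientSecrets() helper that parses client_secrets.json and the YouTube GData scope:

```java
public static String getAuthorizationUrl(HttpServletRequest request) {
    GoogleClientSecrets secrets = getClientSecrets();  // hypothetical helper

    return new GoogleAuthorizationCodeRequestUrl(
            secrets.getDetails().getClientId(),
            "http://localhost:8888/authSub",                      // registered redirect URI
            Collections.singleton("https://gdata.youtube.com"))   // assumed YouTube scope
        .setAccessType("offline")   // ask for a refresh token as well
        .build();
}
```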

Now, if the user had already granted OAuth access, the getActiveCredential() method would instead grab the credentials from the CredentialStore.

Turning back to the URL that receives the results of the OAuth grant -- in this case, http://localhost:8888/authSub -- you may be wondering: how can Google post to that internal-only address? Well, it's the user's browser that is actually posting back the results, so localhost, in this case, resolves just fine. Let's look at the servlet called OAuth2Callback that is used to process this callback (see the web.xml for how the servlet mapping for authSub is done).

The most important take-away from this class is the line:
AuthorizationCodeResponseUrl authResponse = 
       new AuthorizationCodeResponseUrl(fullUrlBuf.toString());
The AuthorizationCodeResponseUrl class is provided as a convenience  by Google to parse the results of the OAuth request. If the getError() method of that class isn't null, that means that the user rejected the request. In the event that it is null, indicating the user approved the request, the method call getCode() is used to retrieve the one-time authorization code. This code value is placed into the user's session, and when the Utils.getActiveCredential() is invoked following the redirect to the user's target URL (via the filter), it will exchange that authorization code for a longer-term access and refresh token using the call:
credential = exchangeCode((String) request.getSession().getAttribute("code"));
The Utils.exchangeCode() method is shown next:

This method also uses a Google class called GoogleAuthorizationCodeTokenRequest that is used to call Google to exchange the one-time OAuth authorization code for the longer-duration access token.
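The exchange can be sketched like this (again assuming a getClientSecrets() helper; the redirect URI must match the one registered with Google):

```java
public static Credential exchangeCode(String authorizationCode) throws IOException {
    GoogleClientSecrets secrets = getClientSecrets();  // hypothetical helper

    // Trade the one-time code for access and refresh tokens
    GoogleTokenResponse tokenResponse = new GoogleAuthorizationCodeTokenRequest(
            new NetHttpTransport(),
            new JacksonFactory(),
            secrets.getDetails().getClientId(),
            secrets.getDetails().getClientSecret(),
            authorizationCode,
            "http://localhost:8888/authSub")
        .execute();

    // Wrap the tokens in a Credential for later API calls
    return new GoogleCredential.Builder()
            .setTransport(new NetHttpTransport())
            .setJsonFactory(new JacksonFactory())
            .setClientSecrets(secrets)
            .build()
            .setFromTokenResponse(tokenResponse);
}
```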

Now that we've (finally) got our access token that is needed for the YouTube API, we're ready to display to the user 10 of their video favorites.


Calling the YouTube API Services

With the access token in hand, we can now proceed to display the user's list of favorite videos. In order to do this, a servlet called FavoritesServlet is invoked. It will call the YouTube API, parse the resulting JSON-C format into some local Java classes via Jackson, and then send the results to the JSP page for processing. Here's the servlet:

Since this post is mainly about the OAuth process, I won't go into too much detail how the API call is placed, but the most important line of code is:
feed = YouTube.fetchFavs(credential.getAccessToken());
Where feed is an instance of VideoFeed. As you can see, another helper class called YouTube is used for doing the heavy-lifting. Just to wrap things up, I'll show the fetchFavs() method.

It uses the Google class called HttpRequestFactory to construct an outbound HTTP API call to YouTube. Since we're using GAE, we're limited as to which classes we can use to place such requests.  Notice the line of code:
url.access_token = accessToken;
That's where we are using the access token that was acquired through the OAuth process.
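Under those constraints, fetchFavs() might be sketched as follows (the URL subclass, query parameters, and parser choice are assumptions):

```java
public class YouTube {

    // GenericUrl subclass; @Key fields become query parameters
    public static class FavoritesUrl extends GenericUrl {
        public FavoritesUrl(String encodedUrl) { super(encodedUrl); }
        @Key("access_token") public String access_token;
        @Key public String alt = "jsonc";
        @Key("max-results") public int maxResults = 10;
    }

    public static VideoFeed fetchFavs(String accessToken) throws IOException {
        // GAE-friendly HTTP client from the google-http-client library
        HttpRequestFactory factory = new NetHttpTransport()
                .createRequestFactory(new HttpRequestInitializer() {
                    public void initialize(HttpRequest request) {
                        request.setParser(new JsonObjectParser(new JacksonFactory()));
                    }
                });

        FavoritesUrl url = new FavoritesUrl(
                "https://gdata.youtube.com/feeds/api/users/default/favorites?v=2");
        url.access_token = accessToken;  // the token acquired through the OAuth flow

        return factory.buildGetRequest(url).execute().parseAs(VideoFeed.class);
    }
}
```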

So, while it took a fair amount of code to get the OAuth stuff working correctly, once it's in place, you are ready to rock-and-roll with calling all sorts of Google API services!

Wednesday, May 23, 2012

Authenticating for Google Services in Google App Engine

(NOTE: This tutorial uses the older, and now deprecated Oauth 1.0 approach -- See my new blog entry for an OAuth 2.0 tutorial).


This post will illustrate how to build a simple Google App Engine (GAE) Java application that authenticates against Google as well as leverages Google's OAuth for authorizing access to Google's API services such as Google Docs.  In addition, building on some of the examples already provided by Google, it will also illustrate how to persist data using the App Engine Datastore and Objectify.

Project Source Code

The motivation behind this post is that I struggled to previously find any examples that really tied these technologies together. Yet, these technologies really represent the building-blocks for many web applications that want to leverage the vast array of Google API services. 

To keep things simple, the demo will simply allow the user to login via a Google domain; authorize access to the user's Google Docs services; and display a list of the user's Google Docs Word and Spreadsheet documents. Throughout this tutorial I do make several assumptions about the reader's expertise, such as a pretty deep familiarity with Java.

Overview of the Flow

Before we jump right into the tutorial/demo, let's take a brief look at the navigation flow. 
While it may look rather complicated, the main flow can be summarized as:
  1. User requests access to listFiles.jsp (actually any of the JSP pages can be used).
  2. A check is made to see if the user is logged into Google. If not, they are redirected to a Google login page -- once logged in, they are returned back.
  3. A check is then made to determine whether the user is stored in the local datastore. If not, the user is added along with the user's Google domain email address.
  4. Next, we check to see if the user has granted OAuth credentials to the Google Docs API service. If not, the OAuth authentication process is initiated. Once the OAuth credentials are granted, they are stored in the local user table (so we don't have to ask each time the user attempts to access the services).
  5. Finally, a list of Google Docs Spreadsheet or Word docs is displayed.
This same approach could be used to access other Google services, such as YouTube (you might display a list of the user's favorite videos, for example).

Environment Setup

For this tutorial, I am using the following:
  • Eclipse Indigo Service Release 2 along with the Google Plugin for Eclipse (see setup instructions).
  • Google GData Java SDK Eclipse plugin version 1.47.1 (see setup instructions). 
  • Google App Engine release 1.6.5. Some problems exist with earlier versions, so I'd recommend making sure you are using it. It should install automatically as part of the Google Plugin for Eclipse.
  • Objectify version 3.1. The required library is installed already in the project's war/WEB-INF/lib directory.
 After you have imported the project into Eclipse, your build path should resemble:

The App Engine settings should resemble:

You will need to setup your own GAE application, along with specifying your own Application ID (see the Google GAE developer docs). 

The best tutorial I've seen that describes how to use OAuth to access Google API services can be found here. One of the more confusing aspects I found was how to acquire the necessary consumer key and consumer secret values that are required when placing the OAuth request. The way I accomplished this was:
  1. Create the GAE application using the GAE Admin Console. You will need to create your own Application ID (just a name for your webapp). Once you have it, you will update your Application ID in the Eclipse App Engine settings panel that is shown above.
  2. Create a new Domain for the application. For example, since my Application ID was specified above as "tennis-coachrx", I configured the target URL path prefix to match. You will see how we configure that servlet to receive the credentials shortly.
  3. To complete the domain registration, Google will provide you an HTML file that you can upload. Include that file in the root path under the /src/war directory and upload the application to GAE. This way, when Google runs its check, the file will be present and it will generate the necessary consumer credentials. Here's a screenshot of what the setup looks like after it is completed:

Once you have the OAuth Consumer Key and OAuth Consumer Secret, you will then replace the following values in the com.zazarie.shared.Constant file:

final static String CONSUMER_KEY = "";
final static String CONSUMER_SECRET = "";

Whew, that seemed like a lot of work! However, it's a one-time deal, and you shouldn't have to fuss with it again.

Code Walkthrough

Now that we got that OAuth configuration/setup out of the way, we can dig into the code. Let's begin by looking at the structure of the war directory, where your web assets reside:

The listFiles.jsp is the default JSP page that is displayed when you first enter the webapp. Let's now look at the web.xml file to see how this is configured, along with the servlet filter which is central to everything.

The servlet filter called AuthorizationFilter is invoked whenever a JSP file located in the html directory is requested. The filter, as we'll look at in a moment, is responsible for ensuring that the user is logged into Google, and if so, then ensures that the OAuth credentials have been granted for that user (i.e., it will kick off the OAuth credentialing process, if required).

The servlet name of Step2 represents the servlet that is invoked by Google when the OAuth credentials have been granted -- think of it as a callback. We will look at this in more detail in a bit.

Let's take a more detailed look at the AuthorizationFilter.

AuthorizationFilter Deep Dive

The doFilter method is where the work takes place in a servlet filter.  Here's the implementation:

Besides the usual housekeeping stuff, the main logic begins with the line:

AppUser appUser = LoginService.login(request, response);

As we will see in a moment, the LoginService is responsible for logging the user into Google, and also will create the user in the local BigTable datastore. By storing the user locally, we can then store the user's OAuth credentials, eliminating the need for the user to have to grant permissions every time they access a restricted/filtered page.

After LoginService has returned the user (AppUser object), we then store that user object into the session (NOTE: to enable sessions, you must set sessions-enabled in the appengine-web.xml file):

session.setAttribute(Constant.AUTH_USER, appUser);

We then check to see whether the OAuth credentials are associated with that user:

if (appUser.getCredentials() == null) {

   session.setAttribute(Constant.TARGET_URI, request.getRequestURI());

   OAuthRequestService.requestOAuth(request, response, session);
} else 

If getCredentials() returns a null, the OAuth credentials have not already been assigned for the user. This means that the OAuth process needs to be kicked off. Since this involves a two-step process of posting the request to Google and then retrieving back the results via the callback (Step2 servlet mentioned above), we need to store the destination URL so that we can later redirect the user to it once the authorization process is completed. This is done by storing the URL requested into the session using the setAttribute method.

We then kick off the OAuth process by calling the OAuthRequestService.requestOAuth() method (details discussed below).

In the event that getCredentials() returns a non-null value, this indicates that we already have the user's OAuth credentials from their local AppUser entry in the datastore, and we simply add them to the session so that we can use them later.
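Assembled from the fragments above, the heart of doFilter might look like this (constant names beyond those already shown are assumptions):

```java
AppUser appUser = LoginService.login(request, response);
if (appUser != null) {
    HttpSession session = request.getSession();
    session.setAttribute(Constant.AUTH_USER, appUser);

    if (appUser.getCredentials() == null) {
        // no OAuth grant yet -- remember where the user was heading, then start OAuth
        session.setAttribute(Constant.TARGET_URI, request.getRequestURI());
        OAuthRequestService.requestOAuth(request, response, session);
        return;  // response now carries the authorization link
    } else {
        // credentials already stored locally -- make them available to the request
        session.setAttribute(Constant.CREDENTIALS, appUser.getCredentials());
    }
}
chain.doFilter(request, response);
```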

LoginService Deep Dive

The LoginService class has one main method called login, followed by a bunch of JPA helper methods for saving or updating the local user in the datastore. We will focus on login(), since that is where most of the business logic resides.

The first substantive thing we do is use Google UserService class to determine whether the user is logged into Google:

UserService userService = UserServiceFactory.getUserService();

User user = userService.getCurrentUser();

If the User object returned by Google's call is null, the user isn't logged into Google, and they are redirected to a login page using:
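Presumably the redirect is built with UserService.createLoginURL, passing the originally requested URI as the destination (a sketch, not the post's exact code):

```java
// Inside LoginService.login() -- user is null, so bounce to the Google login page;
// Google returns the user to the URI they originally requested
User user = userService.getCurrentUser();
if (user == null) {
    response.sendRedirect(userService.createLoginURL(request.getRequestURI()));
    return null;
}
```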


If the user is logged in (i.e., not null), the next thing we do is determine whether that user exists in the local datastore. This is done by looking up the user with their logged-in Google email address with appUser = findUser(userEmail). Since JPA/Objectify isn't the primary discussion point for this tutorial, I won't go into how that method works. However, the Objectify web site has some great tutorials/documentation.

If the user doesn't exist locally, the object is populated with Google's email address and created using appUser = addUser(userEmail). If the user does exist, we simply update the login timestamp for logging purposes.

OAuthRequestService Deep Dive

As you may recall from earlier, once the user is setup locally, the AuthorizationFilter will then check to see whether the OAuth credentials have been granted by the user. If not, the OAuthRequestService.requestOAuth() method is invoked. It is shown below:

To simplify working with OAuth, Google has a set of Java helper classes that we are utilizing. The first thing we need to do is setup the consumer credentials (acquiring those was discussed earlier):

GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();

Then, we set the scope of the OAuth request using:

Where Constant.GOOGLE_RESOURCE resolves to the Google Docs scope URL. When you make an OAuth request, you specify the scope of the resources you are attempting to access. In this case, we are trying to access Google Docs (the GData APIs for each service have the scope URL provided). Next, we establish where we want Google to return the reply.
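Putting those parameters together, the setup presumably looks like this (a sketch using the GData GoogleOAuthParameters setters):

```java
GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);
oauthParameters.setScope(Constant.GOOGLE_RESOURCE);       // the Google Docs scope
oauthParameters.setOAuthCallback(Constant.OATH_CALLBACK); // where Google sends the user back
```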


This value changes depending on whether we are running locally in dev mode or deployed to Google App Engine. Here's how the values are defined in the Constant interface:

// Use for running on GAE
//final static String OATH_CALLBACK = "";

// Use for local testing
final static String OATH_CALLBACK = "";

We then sign the request using Google's helper:

GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());

We then generate the URL that the user will navigate to in order to authorize access to the resource. This is generated dynamically using:

String approvalPageUrl = oauthHelper.createUserAuthorizationUrl(oauthParameters);

The last step is to provide a link to the user so that they can navigate to that URL to approve the request. This is done by constructing some simple HTML that is output using res.getWriter().print().

Once the user has granted access, Google calls back to the servlet identified by the URL parameter /authSub, which corresponds to the servlet class RequestTokenCallbackServlet. We will examine this next.

RequestTokenCallbackServlet Deep Dive

The servlet uses the Google OAuth helper classes to generate the access token and access token secret that will be required on subsequent calls to the Google Docs API service. The doGet method receives the callback response from Google.

The GoogleOAuthHelper class is used to perform the housekeeping tasks required to populate the two values we are interested in:

String accessToken = oauthHelper.getAccessToken(oauthParameters);
String accessTokenSecret = oauthParameters.getOAuthTokenSecret();

Once we have these values, we then requery the user object from the datastore, and save those values into the AppUser.OauthCredentials subclass:

appUser = LoginService.getById(appUser.getId());
appUser = LoginService.updateUserCredentials(appUser,
          new OauthCredentials(accessToken, accessTokenSecret));

In addition, you'll see they are also stored into the session so we have them readily available when the API request to Google Docs is placed.

Now that we've got everything we need, we simply redirect the user back to the resource they had originally requested:

// the session attribute holding the originally requested URI -- the
// attribute name shown here is illustrative
RequestDispatcher dispatcher = req.getRequestDispatcher(
      (String) req.getSession().getAttribute("originalUri"));
dispatcher.forward(req, resp);

Now, when they access the JSP page listing their documents, everything should work!

Here's a screencast demo of the final product:

Hope you enjoyed the tutorial and demo -- look forward to your comments!

Sunday, May 6, 2012

Business Rules with Drools Intro

I've created an introductory presentation on the use and benefits of using a rules engine/business rule management system. It uses JBoss Drools for the samples. I will be following up shortly with a more detailed presentation on how to use Drools integration so that you may expose your business rules as a web service API.

Monday, February 27, 2012

Using Apache Camel to Manage Amazon EC2 Startup/Shutdown

A lot of you are likely familiar with Amazon's EC2 cloud service. It's an "infrastructure-as-a-service" solution that allows you to rent virtual machines (VMs) on demand. Those VM instances can run a variety of OSs, but the most common are Linux-based distros. Amazon offers a whole variety of VM types, depending upon your need, ranging from high-performance VMs to those sporting a huge amount of RAM.

While the lower-end Amazon instances are very cost-effective, they can still get rather pricey over time. In many cases, you likely don't need them running all-the-time. Using Amazon's EC2 API, you can programmatically manage just about any aspect of the VM, from startup to shutdown.

My son Owen recently has been using an EC2 instance to host some popular games such as Minecraft. By hosting it himself, he can make modifications to the game logic and apply other mods (apparently). However, since he's in school during the day, the instance really doesn't need to be running all the time.

Given that, I had started to just write some Java routines that leverage the Amazon API to startup or shutdown his EC2 instance. I was then going to use Quartz to provide the scheduling capabilities. This would obviously work fine. However, I figured it might be a whole lot easier to use Apache Camel to manage the lifecycle.   In this post, I'll describe how I did that -- here's a link to the Eclipse project (you don't have to use Eclipse to run the project -- you can just unzip it as well).


In order to follow this example, I'm assuming you're fairly familiar with Java and related tools such as Maven and Eclipse. To run the project yourself, you'll need to have Java and Maven installed. While recommended, you don't need Eclipse if you just want to run everything as-is from the command line.

Also, since we're using this project to start/stop Amazon EC2 instances, I also presume you have an EC2 account setup with an EC2 instance available to use.

To use the example, all you have to do is unzip the project into a folder of your choice. If you have Eclipse configured with the Maven plugin, you can simply import it -- within Eclipse choose File->Import->Existing Projects into Workspace. Then, choose the option Select Archive File, and select the zip file.

Quick Intro to Camel

While this article isn't intended as an introduction to Camel (I will actually have a future post on this), a brief intro is necessary. Camel can be described as a messaging orchestration and routing product. It shares many characteristics with a conventional ESB, insofar as it contains a wealth of different protocol adapters. However, it's a lot lighter-weight than an ESB, and doesn't have its own container in which it runs. As with most ESBs, you can also add your own components to extend the out-of-the-box functionality.

There are two main ways in which you can configure Camel. One is using the Java DSL, and the other is the XML-based Spring DSL. We're going to use the Spring approach for this project, since that's the one I generally prefer (it's also a lot easier to learn if you don't know Java).

Here's a simple example of a configuration that produces the customary "Hello World" output to the console:

<beans xmlns="">  <!-- other namespaces not shown -->

    <camel:camelContext xmlns="">

        <!-- this route prints "Hello World!" to the console -->
        <camel:route>
            <camel:from uri="timer://myTimer?period=10&amp;repeatCount=1"/>
            <camel:setBody>
                <camel:constant>Hello World!</camel:constant>
            </camel:setBody>
            <camel:to uri="stream:out"/>
        </camel:route>

    </camel:camelContext>

</beans>




For those of you who know Spring, the above should seem pretty familiar. The top-level element is the typical beans element, which has a bunch of namespace definitions associated with it (not shown, but this simple example can be found in the camel-spring-hello-world.xml file in the project). The most relevant part for our discussion starts with the camel:route element. That's where we define the route that the message will traverse. The message is initiated using the camel:from element's @uri definition. In this case, we're using the Camel Timer component (there are something like 80 components in Camel). The part which follows timer: defines the frequency with which a message will be initiated (kicked off). In this case, it will start generating messages every 10 milliseconds following startup, based upon the setting in the @period attribute. However, because @repeatCount is set to 1, it will only generate a single message.

Most of the time, the camel:from element is used to poll or periodically call a remote service such as an FTP site (for periodically scanning for files) or some HTTP service. In this case, only an empty message is being generated. So, the next statement in the route defines a message body -- in this case, the proverbial "Hello World!". That is done using the camel:setBody element. Since we're assigning it a static value, we use the child node camel:constant to set the value (values can also be set more dynamically using expressions, which we'll see later).

Lastly, we're using the Stream component to output the results (it will print "Hello World!") to the command line. If you want to run this very simple demo, edit the pom.xml file located at the root directory so that the configuration element within the project/build/plugins/plugin element for org.apache.camel references the hello-world file. Then, at the command line, enter mvn camel:run. This will launch Camel in an embedded fashion and run the route.
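The plugin configuration might look roughly like this -- a sketch only, using the camel-maven-plugin's fileApplicationContextUri option and assuming the hello-world file lives under src/main/resources:

```xml
<plugin>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-maven-plugin</artifactId>
    <configuration>
        <!-- point camel:run at the hello-world Spring file
             (path assumed from the project layout described above) -->
        <fileApplicationContextUri>src/main/resources/camel-spring-hello-world.xml</fileApplicationContextUri>
    </configuration>
</plugin>
```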

Moving on to the EC2 Example

With that basic overview in mind, let's dive into the real problem we are trying to solve -- using Camel to schedule the starting and stopping of an Amazon EC2 instance. 

Camel comes out-of-the-box with some existing integrations to Amazon's web services; however, there isn't one specifically for EC2. Fortunately, Amazon has a Java API that makes it pretty straightforward to use. We'll just need to write a little Camel component using that to satisfy our needs.

In Camel, there are a few different ways you can extend its functionality. We'll demonstrate one of the easiest -- just using a plain old Java object (POJO). I'm calling the Java bean EC2Controller. We will define 3 methods -- one for starting the EC2 instance (startInstance), one for stopping the EC2 instance (stopInstance), and one general-purpose method for setting up the necessary communication configuration (setup).

Let's start with the setup method:

private void setup() throws Exception {
   try {
      // load the AWS credentials from the classpath properties file
      // (the file name shown follows the AWS SDK sample convention)
      credentials =
            new PropertiesCredentials(EC2Controller.class.getResourceAsStream(
                  "AwsCredentials.properties"));
   } catch (IOException e1) {
      System.out.println("Credentials were not properly entered into " +
            "the properties file");
      throw new Exception(e1.getMessage());
   }
}

Not really much happening here -- just trying to load the AWS credentials, which must be present in the file referenced above (NOTE: you will need to update this file with your own AWS credential settings).
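Under the covers, PropertiesCredentials just parses a standard Java properties file containing the two AWS keys. Here's what that amounts to in plain-JDK terms (the key names follow the AWS SDK's AwsCredentials.properties convention; the values are fake):

```java
import java.io.StringReader;
import java.util.Properties;

// A plain-JDK illustration of what PropertiesCredentials reads:
// a properties file with accessKey and secretKey entries.
public class CredentialsDemo {
    public static void main(String[] args) throws Exception {
        String file = "accessKey=AKIAEXAMPLE\nsecretKey=abc123example";
        Properties props = new Properties();
        props.load(new StringReader(file));
        System.out.println("access key: " + props.getProperty("accessKey"));
    }
}
```

If either key is missing or the file isn't on the classpath, you'd see the IOException path in the setup method above.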

The startInstance method isn't terribly complicated either. Time doesn't permit me to go through what each of the Amazon API calls is doing, but I think you'll see it's pretty easy to follow along (I've added some annotations to the code).

public void startInstance(@Body String body, Exchange exchange) throws Exception {
   Log.debug("Entering EC2Controller.startInstance(). " +
             "Processing Ec2 Instance: " + body);
   // covered this above
   setup();
   // Create the AmazonEC2Client object so we can call various APIs.
   // The credentials class variable gets populated in the setup method.
   AmazonEC2 ec2 = new AmazonEC2Client(credentials);

   // We'll use this request object to determine whether the EC2 instance
   // is already running
   DescribeInstanceStatusRequest statusRequest
                 = new DescribeInstanceStatusRequest();
   ArrayList<String> instances = new ArrayList<String>();
   // The message body, passed by Camel, will contain the EC2 instance we
   // are working with
   instances.add(body);
   statusRequest.setInstanceIds(instances);
   Log.debug("Checking server status");
   // This will actually run the request against Amazon's web services
   DescribeInstanceStatusResult result =
                 ec2.describeInstanceStatus(statusRequest);
   // If the result contains a status, that means it's already running.
   // Otherwise, start it
   if (result.getInstanceStatuses().size() > 0) {
      exchange.getOut().setBody("Camel Start Instance Status: " +
                             "Instance already running...not started");
   } else {
      Log.debug("Starting ec2 instance: " + body);
      // This request object is used to start the EC2 instance.
      StartInstancesRequest startRequest = new StartInstancesRequest(instances);
      // Sends the web services command to Amazon AWS
      ec2.startInstances(startRequest);
      // this new message in the body will be sent via email
      exchange.getOut().setBody("Camel Start Instance Status: Ec2 Instance " +
                                  body + " started");
   }
}

I think for the most part the code is pretty self-explanatory if you are already familiar with Amazon's EC2 concepts. As you will see in a moment, we pass the Amazon EC2 instance id in the Camel message body. The method parameter annotation, @Body, is a Camel annotation; based upon it, Camel will automatically pass the appropriate object to the parameter (Exchange is also passed automatically by Camel -- since that's a native Camel object, it doesn't require an annotation).
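To see what annotation-driven parameter binding is doing conceptually, here's a tiny plain-Java sketch. The names are illustrative -- this is not Camel's actual internals, just the general reflection technique:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// A toy version of @Body binding: scan a method's parameters and pass
// the message body to any parameter carrying the annotation.
public class BindingDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface Body { }

    public static String invoked;

    public void startInstance(@Body String body) {
        invoked = "starting instance " + body;
    }

    public static void main(String[] args) throws Exception {
        String messageBody = "i-12345678";
        BindingDemo bean = new BindingDemo();
        Method m = BindingDemo.class.getMethod("startInstance", String.class);
        Object[] bindArgs = new Object[1];
        // find the parameter annotated @Body and bind the message body to it
        Annotation[][] paramAnnotations = m.getParameterAnnotations();
        for (int i = 0; i < paramAnnotations.length; i++) {
            for (Annotation a : paramAnnotations[i]) {
                if (a instanceof Body) {
                    bindArgs[i] = messageBody;
                }
            }
        }
        m.invoke(bean, bindArgs);
        System.out.println(invoked);
    }
}
```

Camel does considerably more than this (type conversion, multiple bindings, and so on), but the mechanism is the same idea.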

With the EC2 instance id at hand, we then populate it into an ArrayList. This is required because the Amazon EC2 API calls often take a list, since they can perform activities on multiple instances at once. We use the Amazon APIs to determine whether the instance in question is already running, and if not, we use a StartInstancesRequest to fire it up.

The stopInstance method is very similar, but we instead use the StopInstancesRequest Amazon API call to stop the running instance. Pretty easy, huh?

Now that we've got our custom Java bean written to interface with Amazon, let's wire it all up in Camel.

Camel Setup/Configuration

Before we delve into the route configuration, which is where all the fun resides, let's first look at some of the initial setup within the camelContext element (we're working with the camel-context.xml file here, located under the src/main/resources/META-INF/spring subdirectory). This is where you can specify property files that need to be referenced, exception-handling logic, and, as a convenience, your endpoint URIs. Here's what we have:

<camel:camelContext xmlns="">

   <camel:propertyPlaceholder id="props" location="" />

   <!-- define quartz endpoints -->
   <camel:endpoint id="Quartz.Start" uri="quartz://startEc2?cron=0+0+08+*+*+?"/>
   <camel:endpoint id="Quartz.Stop" uri="quartz://stopEc2?cron=0+0+20+*+*+?"/>

   <camel:endpoint id="RunOnce" uri="timer://myTimer?fixedRate=true&amp;period=10&amp;repeatCount=1"/> <!--  for testing -->

   <!-- trap any exception and route a status email -->
   <camel:onException>
      <camel:exception>java.lang.Exception</camel:exception>
      <camel:to uri="seda:SendStatusEmail"/>
   </camel:onException>


The first thing we do is define a location for a property file. As we'll see below, this is simply used to avoid hard-coding values in the Camel configuration (some hard-coding is unfortunately required). This is done using the camel:propertyPlaceholder element, which specifies the location of a properties file on the classpath. The next thing we do is define some endpoints. The first two represent Quartz component endpoints that define schedules for starting and stopping the instances. The Camel Quartz documentation has the details on how to interpret the configuration strings, but suffice it to say that Quartz.Start will kick off a message at 8am, and Quartz.Stop will do the same at 8pm. While we could include these directly within the @uri attribute of the camel:from element, centralizing the endpoints makes maintenance a bit more straightforward (although I'm a bit loose in following that pattern myself, as you likely have noticed).
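If the '+'-laden cron strings in those endpoint URIs look cryptic, here's a small demo of how they decode. In the endpoint string, '+' stands in for the spaces of a normal Quartz cron expression (seconds minutes hours day-of-month month day-of-week):

```java
// Decode the cron portion of the Quartz endpoint URI used above.
public class QuartzUriDemo {
    public static void main(String[] args) {
        String uri = "quartz://startEc2?cron=0+0+08+*+*+?";
        // strip everything up to "cron=" and restore the spaces
        String cron = uri.substring(uri.indexOf("cron=") + 5).replace('+', ' ');
        System.out.println("cron expression: " + cron);
        // field 2 (zero-based) is the hour
        System.out.println("fires daily at hour: " + cron.split(" ")[2]);
    }
}
```

So Quartz.Start fires at hour 08 and, by the same reading, Quartz.Stop (cron=0+0+20+*+*+?) fires at hour 20.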

Camel actually comes with two scheduling type services/components, and we've actually seen both of them here. The Timer component, which we use for defining the endpoint called RunOnce, is pretty bare-bones in functionality (this endpoint is used for testing only -- handy for kicking off a message right after Camel starts without having to twiddle with the cron settings used in the Quartz endpoints). If you need more advanced scheduling capabilities, the Quartz component is the way to go (gives you functionality like Unix/Linux cron).

As for the exception handling, all we're basically doing is trapping any exceptions that occur when the process runs, with an email ultimately being sent in the event of an exception (very basic error handling, to be sure). We'll talk a little later about the route we use for sending the email (the seda:SendStatusEmail uri).

Now that we have the basic housekeeping stuff in place, let's look at the route configuration.

Camel Route Configuration for Starting an EC2 Instance

We'll actually define two main routes -- one for starting up the EC2 instance, and the other for shutting it down.  They are both very simple -- a Quartz component will fire off a message at the preordained time, which will then kickoff the Java bean we created for interfacing with Amazon. Finally, we'll send an email out with the status update.

Let's look at the startup route first:


<camel:route>

   <camel:from uri="ref:Quartz.Start" />

   <!-- set the message body to the EC2 instance id from the property file -->
   <camel:setBody>
      <camel:simple>${properties:EC2.INSTANCE}</camel:simple>
   </camel:setBody>

   <camel:bean ref="Controller" method="startInstance"/>

   <camel:to uri="seda:SendStatusEmail"/>

   <camel:to uri="stream:out" />

</camel:route>


The camel:from element, which defines how the message will be triggered in the route, references the Quartz.Start endpoint we defined a bit earlier. When that endpoint fires the message, we then set the Camel message body to the value associated with the property EC2.INSTANCE (as you recall, the property file was loaded using the propertyPlaceholder statement). In other words, we're setting the Camel message body to the Amazon EC2 instance id we want to start. Then, the bean identified as Controller is called. This invokes the EC2Controller Java bean we created earlier that interfaces with Amazon's AWS. This bean is configured in the beans element of this same XML file as:

<bean id="Controller" name="Controller" class="zazarie.camel.beans.EC2Controller" />

Lastly, we make that curious call to seda:SendStatusEmail. This is actually another defined endpoint, and it's used for sending out the emails. Why do this? For one, it allows us to easily re-use that endpoint, since we need that functionality in a few places. Why the seda protocol? It's an asynchronous protocol used by Camel, which is perfect here, because sending the email is really just fire-and-forget functionality.
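If you haven't seen seda before, the idea is easy to picture in plain Java: the producer drops the message on an in-memory queue and moves on immediately, while a separate consumer thread processes it. A minimal sketch (this is an analogy, not Camel's implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Fire-and-forget, seda-style: producer hands off to a queue and continues;
// a consumer thread picks the message up asynchronously.
public class SedaAnalogy {
    public static void main(String[] args) throws Exception {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
        ExecutorService consumer = Executors.newSingleThreadExecutor();
        consumer.submit(new Runnable() {
            public void run() {
                try {
                    // consumer blocks until a message arrives, then processes it
                    System.out.println("consumed: " + queue.take());
                } catch (InterruptedException ignored) { }
            }
        });
        // the producer just drops the message on the queue and moves on
        queue.put("Camel Start Instance Status: started");
        consumer.shutdown();
        consumer.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

In Camel, routing to seda:SendStatusEmail is exactly this handoff: the start/stop route doesn't wait for the email to be sent.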

Here's how we define that seda endpoint:


<camel:route>

   <camel:from uri="seda:SendStatusEmail"/>

   <!-- the property key names here are illustrative -->
   <camel:setHeader headerName="to">
      <camel:simple>${properties:EMAIL.TO}</camel:simple>
   </camel:setHeader>

   <camel:setHeader headerName="subject">
      <camel:simple>${properties:EMAIL.SUBJECT}</camel:simple>
   </camel:setHeader>

   <camel:to uri="smtps://"/>

</camel:route>


So, when you saw the statement earlier:

<camel:to uri="seda:SendStatusEmail"/>

This was invoking the route shown above. The route first sets the to and subject headers using properties from the property file, and then calls the smtps (Mail component) protocol to send the outbound message. For some reason, Camel doesn't seem to allow certain properties, such as username and password, to be set using the camel:setHeader mechanism (which is a minor bummer).

Now that we've covered the route that starts the Amazon EC2 instance, let's look at the one used for stopping it -- it's pretty much identical.

Camel Route Configuration for Stopping an EC2 Instance

The route used to stop the EC2 instance is actually identical, with one minor change -- we invoke the stopInstance method of the Controller bean:

<camel:bean ref="Controller" method="stopInstance"/>
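For completeness, the full stop route might look like this -- a sketch mirroring the start route, with the EC2.INSTANCE property key carried over from there:

```xml
<camel:route>

   <camel:from uri="ref:Quartz.Stop" />

   <!-- set the message body to the EC2 instance id from the property file -->
   <camel:setBody>
      <camel:simple>${properties:EC2.INSTANCE}</camel:simple>
   </camel:setBody>

   <camel:bean ref="Controller" method="stopInstance"/>

   <camel:to uri="seda:SendStatusEmail"/>

</camel:route>
```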

Setup and Run

After you've extracted the project into some directory location, you will need to do a few things to get it ready for your environment.
  1. Edit the properties file. You will want to enter your own email address where you want the status messages sent, and change the EC2 instance id to the one you want to manage.
  2. In the camel-context.xml file, you will want to edit the smtps URI so that it contains the SMTP server you are using to send the outbound status message. 
  3. Edit the AWS credentials file to use your own Amazon EC2 security credentials.
  4. To set it up for testing, use the RunOnce endpoint in lieu of the Quartz.Start or Quartz.Stop @ref's in the routes used for starting and stopping EC2. 
Finally, we are ready to run it using mvn camel:run.

Additional Functionality

In the camel-context.xml file, I also included an additional route that demonstrates how you can run some remote commands on an EC2 instance. This is the commented-out route that uses the exec Camel component. One common requirement before starting or stopping an EC2 instance is to run some system commands. This might be to cleanly shut down a database, or to issue some command to start a service upon startup. I had attempted to use Camel's built-in SSH component, but never could get it working correctly. However, it's pretty easy to just invoke a .bat (Windows) or .sh (Linux) script that uses something like command-line SSH or PuTTY to invoke a remote command. The Mina component could also be used for this purpose (it's very slick), but I didn't have time to fully explore that.

Other areas that this example could be enhanced would be to demonstrate how Camel could be used to periodically poll an EC2 instance for load/usage. Then, if the load exceeds a certain high-water mark, Camel could kick off instantiating a new EC2 instance.

One thing that I haven't had a chance to tackle yet is how to run this routine under a Tomcat container. Running it from the command line is fine for testing, but for a production environment you'll likely want to package it as a war file. That's not hard to do -- maybe I'll add a note on that in the near future if folks are interested.

Let me know your feedback!

Jeff Davis, Monument, CO

References -- Project Source Code.

Friday, February 24, 2012

Back on Blogger

Sorry for the extended absence. I had previously been using a WordPress blog hosted on one of my Amazon EC2 instances, but decided that was getting a little costly. So, I'm back on Blogger. Look for some new blog entries soon. I'll be covering such topics as Apache Camel, Google GWT and GAE (Google App Engine), and a variety of other open source topics.