Problem following the Micronaut quick start

Here is something that took me far too long to figure out and will hopefully help anyone who finds themselves in a similar situation. It probably causes similar problems well beyond Micronaut, but I mention it here because that is where I encountered it.

I am running Windows 10 Professional and using a Java 8 JDK. I had similar problems whether using OpenJDK or the Oracle reference distribution.

Once I installed Micronaut I would run “mn” to get to the CLI as suggested in the Quick Start and would inevitably be met with:

| Error Error occurred running Micronaut CLI: the trustAnchors parameter must be non-empty (Use --stacktrace to see the full trace)

I looked at the batch script and turned on its DEBUG setting (set DEBUG=on; I still have not figured out how to use --stacktrace as suggested in the message), which made it abundantly clear that it was failing while trying to run the Java class io.micronaut.cli.MicronautCli, but really told me nothing more. It seemed clear it was having a problem downloading dependencies, since that message always seems to come down to an SSLException in everything I googled, and I could not imagine what else it might be trying to do.

Eventually I found a note that OpenJDK has a bug, persisting through the latest release of its Java 8 JDK (jdk8u172-b11) and apparently present in the Oracle distribution as well, where the cacerts file (under jre/lib/security) is essentially empty, so the JVM trusts no certificate authorities at all. I copied the file over from the latest OpenJDK Java distribution (101 KB versus 1 KB) and suddenly my dependencies downloaded and I was in!
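One way to confirm this diagnosis is to open the JVM's default truststore and count its entries; a near-zero count means there are no trust anchors. A quick sketch in plain Java (assuming the stock "changeit" password and the standard cacerts locations for Java 8 and Java 9+):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;

public class CacertsCheck {
    // Count the trusted entries in the running JVM's cacerts file.
    public static int trustedCertCount() throws Exception {
        Path home = Paths.get(System.getProperty("java.home"));
        // Java 9+ keeps cacerts under lib/security; on Java 8 java.home
        // points at the jre directory, so lib/security usually works there too
        Path cacerts = home.resolve("lib/security/cacerts");
        if (!Files.exists(cacerts)) {
            cacerts = home.resolve("jre/lib/security/cacerts");
        }
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = Files.newInputStream(cacerts)) {
            ks.load(in, "changeit".toCharArray()); // "changeit" is the stock password
        }
        return ks.size();
    }

    public static void main(String[] args) throws Exception {
        // A count near zero would explain the trustAnchors error
        System.out.println("Trusted entries: " + trustedCertCount());
    }
}
```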

When a POST becomes a GET

I spent more time than I care to admit trying to figure out why an HTTP POST I was making was arriving at my Grails controller without its content. It turns out I was referencing a non-SSL port. Because of the CONFIDENTIAL transport guarantee in web.xml, the request was redirected to the SSL port (redirectPort in Tomcat), and that redirect causes the browser to resubmit the POST as a GET. (org.apache.catalina.filters.RequestDumperFilter was a lifesaver in debugging this, BTW.) Submitting the POST to the actual SSL port made it all better.

Normally this is not an issue which is why it caught me. A web request is normally initiated with a GET and if it is a secure (SSL) web site you are probably going to be guided through a login first as well. But this is an internal web site and I was submitting a SOAP request; negotiating a redirect to HTTPS while trying to POST a SOAP request to a web service is an unhappy thing.
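The conversion is easy to reproduce outside of Tomcat. This sketch (plain Java, using the JDK's built-in HttpServer; the /old and /new paths are made up for illustration) POSTs to a URL that answers with a 302 and reports the method the redirected request arrives with:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RedirectDemo {
    static volatile String methodSeen;

    public static String run() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        // /old answers every request with a redirect, like Tomcat's redirectPort
        server.createContext("/old", ex -> {
            ex.getResponseHeaders().add("Location", "/new");
            ex.sendResponseHeaders(302, -1);
            ex.close();
        });
        // /new records the method the client actually used after the redirect
        server.createContext("/new", ex -> {
            methodSeen = ex.getRequestMethod();
            ex.sendResponseHeaders(200, -1);
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://127.0.0.1:" + port + "/old").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("<soap:Envelope/>".getBytes()); // the POST body that gets dropped
        }
        conn.getResponseCode(); // send the request and follow the redirect
        server.stop(0);
        return methodSeen;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Method after redirect: " + run());
    }
}
```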

All I was trying to do was send the SOAP request along to the real service anyhow, so a prototype Grails controller for doing this (using the wslite plugin) is:

import wslite.soap.SOAPClient
import wslite.soap.SOAPResponse

class SoapController {
  def doSomething(url) {
    SOAPClient client = new SOAPClient(url)
    // Forward the incoming SOAP envelope and its SOAPAction header verbatim
    String requestText = request.reader.text
    String soapAction = request.getHeader('SOAPAction')
    SOAPResponse soapResponse = client.send(SOAPAction: soapAction, requestText)
    // Return the remote service's response to the original caller
    render soapResponse.text
  }
}

Groovy, indy jars, and Unsupported major.minor version

I have an Ant build for my Groovy project and I was trying to upgrade it from 1.7.5 to 2.0.8 to keep in step with my Grails projects. We have to run using Java 6 for now.

So imagine my surprise when I ran the build that worked for Groovy 1.7.5 using Java 6 against a Groovy 2.0.8 install and got this.

java.lang.UnsupportedClassVersionError: org/codehaus/groovy/ant/Groovyc : Unsupported major.minor version 51.0

I have been pulling my hair out for about 24 hours trying to get past this. Well, I finally looked at the contents of the classpathref getting passed into the taskdef for the groovyc task and found the problem.

The taskdef looks like this.

    <taskdef name="groovyc"
        classname="org.codehaus.groovy.ant.Groovyc"
        classpathref="library.groovy.classpath"/>

The path is defined like so.

    <path id="library.groovy.classpath">
        <fileset dir="${project.groovy.home}/embeddable/" />
    </path>

Well guess what the path resolves to?

GROOVY_HOME/embeddable/groovy-all-2.0.8-indy.jar:GROOVY_HOME/embeddable/groovy-all-2.0.8.jar

This subversive indy jar (the invokedynamic build of Groovy, which requires Java 7) has an org.codehaus.groovy.ant.Groovyc class too, only it is compiled with Java 7, hence the major.minor version 51.0! I changed my path definition to the following and all is now well.

    <path id="library.groovy.classpath">
        <fileset dir="${project.groovy.home}/embeddable/" excludes="*-indy.jar" />
    </path>
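If you are ever unsure which Java release a class file was compiled for, you can read the answer straight out of its header: the major version is 50 for Java 6 and 51 for Java 7. A small sketch in plain Java:

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class ClassVersion {
    // Return the major class-file version: 50 = Java 6, 51 = Java 7, etc.
    public static int majorVersion(InputStream classFile) throws Exception {
        DataInputStream in = new DataInputStream(classFile);
        if (in.readInt() != 0xCAFEBABE) {
            throw new IllegalArgumentException("not a class file");
        }
        in.readUnsignedShort(); // minor version, not interesting here
        return in.readUnsignedShort();
    }

    public static void main(String[] args) throws Exception {
        // Inspect this class itself as a handy example
        try (InputStream in = ClassVersion.class.getResourceAsStream("ClassVersion.class")) {
            System.out.println("major version: " + majorVersion(in));
        }
    }
}
```

Pointing it at the Groovyc class inside each jar in the embeddable directory would have exposed the culprit immediately.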

I googled everything I could think of and could not turn up anything on the error that pointed me in the right direction. I am posting this in hopes it does someone else some good.

GroovyRowResult as a HashMap

A row (groovy.sql.GroovyRowResult) implements the Map interface but behaves differently than a HashMap (for example) in that asking for a key it does not contain throws:

groovy.lang.MissingPropertyException: No such property: xyz for class: groovy.sql.GroovyRowResult

A HashMap would just return null of course.

This method will return a true HashMap for a GroovyRowResult.


    private Map rowAsMap(Map row) {
        def rowMap = [:]
        // Copy each column into a plain map so that missing keys return null
        row.keySet().each { column ->
            rowMap[column] = row[column]
        }
        return rowMap
    }
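Since HashMap has a copy constructor that accepts any Map, the same conversion can be done in a single step; sketched here in plain Java:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class RowCopy {
    // Copy any Map into a plain HashMap; missing keys then return null
    // instead of throwing, which is the behavior rowAsMap above provides.
    public static Map<String, Object> rowAsMap(Map<String, Object> row) {
        return new HashMap<>(row);
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1);
        System.out.println(rowAsMap(row).get("xyz")); // prints null, no exception
    }
}
```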

Grails 2.0.4 Tomcat plugin and JNDI environment variables

I had an approach that worked for Grails 1.3.5 but had to revisit it with Grails 2.0.4 and the new Tomcat plugin. The primary reason it had to change was that the script I made my modification to has now moved into compiled code.

To review the situation:

We have a number of environment variables in our web applications that control how our applications behave. For instance, there is a variable that is displayed at the top of the pages (outside of production) that identifies what instance you are in.

When we deploy our war file into a Tomcat instance this is controlled by the conf/context.xml file like so.

<Environment name="appServerInstance" value="My server" type="java.lang.String" override="false"/>

The challenge was to set this up in development mode using grails run-app.

Since I could not influence the way that grails.naming.entries in Config.groovy is handled without modifying compiled code I added a new setting to Config.groovy instead. I am calling it grails.naming.environments (although this does not seem right since they are really naming entries too). As with grails.naming.entries it references a Map of environment names, each one associated with a Map of attributes. An example of this is:

grails.naming.environments = [
        "appServerInstance": [
                type: "java.lang.String",
                value: "My server",
                override: "false"
        ]
]

The way I make use of it is through the _Events.groovy script in the eventConfigureTomcat closure.

import org.codehaus.groovy.grails.commons.ConfigurationHolder

eventConfigureTomcat = { server ->
    processEnvironments(server)
}

private def processEnvironments(server) {
    def config = ConfigurationHolder.config
    def envs = config?.grails?.naming?.environments

    if (envs instanceof Map) {
        envs.each { name, cfg ->
            if (cfg) {
                addEnvironment(server, name, cfg)
            }
        }
    }
}

private addEnvironment(server, name, envCfg) {
    if (!envCfg["type"]) {
        throw new IllegalArgumentException("Must supply an environment type for JNDI configuration")
    }
    def context = server.host.findChildren()[0]
    def env = loadInstance(server, 'org.apache.catalina.deploy.ContextEnvironment')
    env.name = name
    env.type = envCfg.type
    env.description = envCfg.description
    env.override = envCfg.override as boolean
    env.value = envCfg.value

    context.namingResources.addEnvironment env
}

private loadInstance(def server, String name) {
    server.class.classLoader.loadClass(name).newInstance()
}

Then you can access it in the code just like the entry in the Tomcat context.xml.

new InitialContext().lookup('java:comp/env/appServerInstance')

One other thing: _GrailsBootstrap.groovy also makes use of grails.naming.entries, I believe for setting up the Shell or Console. I have not needed that yet so have not looked for a way to add the environment variables there.

Am I the only one surprised that Grails still has not added formal support for this capability?

Grails domain objects, nullable columns, and data binding

This tripped me up recently so I wanted to post my experience. The particular situation involves a Grails (1.3.5) domain object with attributes defined as nullable:true in the static constraints closure. Some of these attributes are represented by a String that should not be blank, meaning that if something non-null is assigned it should contain something other than blanks.

The issue here is that when you use data binding – new MyDomain(someMap) or myDomain.properties = someMap – an empty String in the map is equated to a null if the receiving attribute is nullable. There is a good reason for this since data binding is expecting input from a web page which is always in the form of a String, so if it is an empty String being mapped to a Date (for example) it should really be bound as null.

It gets a little counterintuitive in the situation I started with – a nullable, nonblank String as the target attribute. Basically, you can never end up with a blank in the attribute through data binding so the blank:false constraint is largely pointless.

From an application standpoint, then, all is well and good. If you decide to write a test of the validation on the object you may get tripped up as I was. You can create a domain object using data binding – new MyDomain([nullableNonblank: '']) – but when you validate() it you will not have errors…because the value is null, not blank.

FWIW, you can test this if you really want by subsequently assigning the value without using data binding – myDomain.nullableNonblank = '' – and then validating. Doing so implies that your application assigns values to the object through something other than data binding, though; otherwise the test (and the blank:false constraint itself) are pointless.
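The binding rule can be sketched in a few lines of plain Java (a hypothetical illustration of the behavior, not the actual Grails converter):

```java
public class BindingSketch {
    // Illustration only: during data binding, an empty String bound to a
    // nullable property is converted to null rather than stored as "".
    public static String bindNullable(String input, boolean nullable) {
        if (nullable && input != null && input.isEmpty()) {
            return null;
        }
        return input;
    }

    public static void main(String[] args) {
        System.out.println(bindNullable("", true));  // prints null
        System.out.println(bindNullable("x", true)); // prints x
    }
}
```

The consequence described above falls out directly: a blank value can never survive binding into a nullable String, so a blank:false constraint is never exercised through that path.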

Groovy with Gradle and Artifactory

I am trying to change my Ant builds to Gradle for various reasons. Gradle has a lot of advantages (just ask Hans Dockter if you don’t agree :-)), so I did not want to simply convert my Ant builds or run them wrapped by Gradle. I wanted to do it right.

Well, I’m not sure whether I have or not, but I have learned a lot that could be helpful to others, or even to me when I get separated from this effort by some other distraction and have to come back to it.

For reference, I am using Groovy 1.7.5 and Gradle 1.0-milestone-1. Gradle 1.0-milestone-3 is currently available and hopefully 1.0 will be released soon so that I can go live with a real release. Unfortunately milestone 3 (and 2) has issues running on AIX which is my build environment (http://issues.gradle.org/browse/GRADLE-1479).

Dependency Management

We were not using dependency management, which had us traveling down the road to jar hell, so this is one area that I had to come to terms with. I did not know much about it, and really still do not, but I got something working that appears to take care of my needs; hopefully that giddy feeling will last at least a little while.

Artifactory

Artifactory seemed to be a popular product to get started with, due in no small part to the fact that it has a free (“Open Source”) version; I am running Artifactory 2.3.4. There is also a plugin for Gradle which is helpful; I am using version 2.0.6+ (a snapshot build) of that, which fixes several problems I ran into (https://issues.jfrog.org/jira/browse/GAP-93, https://issues.jfrog.org/jira/browse/GAP-103). For completeness, I also inspired https://issues.jfrog.org/jira/browse/RTFACT-4329.

Out of the box Artifactory has a number of repositories defined. The repositories are of three different kinds – local, remote, and virtual; I’m not fired up about these names but that is what Artifactory calls them so I might as well too.

  • A local repository holds artifacts that you control, whether they are your code potentially published there by your build(s), or they are someone else’s code that you need to manage yourself.
  • A remote repository is related to some other accessible repository (such as Maven Central) and caches artifacts from the related repository when they are requested.
  • A virtual repository is a grouping of local and/or remote repositories that can be read from as one, essentially defining a new namespace that spans several others.
Local Repositories

There are six that come defined with Artifactory, but since I only use two I will mostly limit my discussion to them. Note that local repositories are really the only ones you can deploy artifacts to.

  • libs-release-local is where I publish my artifacts. So as part of my Gradle build I want to put significant resulting artifacts here.
  • ext-release-local is where I deploy artifacts that are provided by someone else but for which there is no existing remote repository I can use to resolve them. For me that is (so far) only the DB2 driver jar file, as I could not find a repository out there that I could retrieve it from.

There is also a plugins-release-local repository whose use is not immediately obvious to me. And for each of these “-release-local” repositories there is a “-snapshot-local” repository, which is presumably used for sticking nightly builds or similar “snapshots” that are supported but need to be isolated from approved releases. The difference is the setting of “Handle Releases” and “Handle Snapshots” in the repository settings.

Okay, so I use these two local repositories. I bet it would be useful to know how you get stuff into them!

  • libs-release-local receives releases through my Gradle build when the artifactoryPublish task is run. I will go into more depth later in this post, but for now it is worth noting that this is a task introduced by the Artifactory plugin in its “publish” closure.
  • ext-release-local receives artifacts most directly using the “Deploy” tab of the Artifactory console – http://{host}[:{port}]/artifactory/webapp/deployartifact.html. When you select the artifact and press “Upload!” you then select the local repository you want to receive it (ext-release-local for me) and then give it the target path you want to use for it under that repository. Note that if you select “Deploy as Maven Artifact” the target path will not be editable, but will be determined by the entries you make for the various Maven attributes. When you are done press “Deploy Artifact” and it should return with a message indicating that it was deployed successfully and the path to the deployed artifact.

If you did not deploy it as a Maven artifact (with a POM deployed or generated for it), you will need to go through the same process a second time to deploy the module descriptor, such as an Ivy XML file.

Remote Repositories

Remote repositories are at the heart of dependency management. If a repository already exists with the dependency you have (and most do) then you just need to make the reference to it and you have a proxy to it in Artifactory. It is a proxy in that it should be empty initially; when you retrieve a dependency Artifactory brings it down into the remote repository cache (if it is not there already) and then hands it to you.

Every remote repository automatically gets a second name space, so where you have the “repo1” remote repository you also have the “repo1-cache” repository. The latter can only be used for retrieval; it will not cause something that is not already in the cache to be downloaded. A reference to the former will cause the artifact(s) to be brought down to the cache.
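Conceptually a remote repository is just a read-through cache with a second, read-only name. A toy model in plain Java (the names here are mine, not Artifactory's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class RemoteRepoSketch {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Function<String, byte[]> upstream; // e.g. a fetch from Maven Central

    public RemoteRepoSketch(Function<String, byte[]> upstream) {
        this.upstream = upstream;
    }

    // Like "repo1": download into the cache on a miss, then serve it
    public byte[] resolve(String path) {
        return cache.computeIfAbsent(path, upstream);
    }

    // Like "repo1-cache": serve only what is already cached, never download
    public byte[] resolveCacheOnly(String path) {
        return cache.get(path);
    }

    public static void main(String[] args) {
        RemoteRepoSketch repo = new RemoteRepoSketch(path -> ("fetched " + path).getBytes());
        System.out.println(repo.resolveCacheOnly("junit.jar"));             // prints null
        repo.resolve("junit.jar");                                          // populates the cache
        System.out.println(new String(repo.resolveCacheOnly("junit.jar"))); // prints fetched junit.jar
    }
}
```

In these terms, flagging the repository "Offline" amounts to disabling the upstream function while leaving the cache readable.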

It is worth noting that a remote repository can be flagged as “Offline” which prevents the main name space from attempting to download artifacts not already in the cache. This represents an additional level of potential control for what gets into the repository. To me it is crazy to let stuff come into the cache from a remote repository without control, so I expect remote repositories to be offline under normal conditions.

Maven and Ivy

Much of my pain was related to mixing repository types. I quickly realized that almost everything available as a remote repository uses Maven. However, Maven is one of the restrictive elements that Gradle is intended to liberate you from, so I had no interest in adopting it myself and preferred to use Ivy for artifacts under my control. At the same time, I do not want my build to be concerned with how the (remote) repository is organized.

Experts readily advised that this is no problem using Artifactory. What took quite a while longer was to find out that it is no problem if you are using Artifactory Pro Version; it is a big problem if you are running the Open Source version as I was. So at this point I am nearing the end of my evaluation period for the Pro Version and am daunted by what I have yet to try that I will need in order to replace all of my builds with Gradle while using Artifactory.

With the Pro Version Artifactory includes the “Repository Layouts” add-on, which allows a request in one form to be mapped to a different form based on the underlying local or remote repository layout. So for my local repositories I use the “gradle-default” layout – effectively Ivy – and for my remote (Maven) repositories I use the “maven-2-default” layout (or whatever is required). This seems to get me to the promised land.

One thing to note though is that the UI does not allow you to change the Repository Layout for an existing repository. This makes some sense if there are artifacts in the repository, at least in a form that conflicts with the new layout, but for me that was not the case; it is very confusing since the default repositories seem comprehensive as long as you want to use Maven, yet using Maven is dubious if you have a choice in the matter.

The “safe” option is to set up new repositories that effectively replace the default ones, only with a different layout; I availed myself of the “dangerous” option of going into Admin -> Advanced -> Config Descriptor and modifying the <repoLayoutRef> elements for those repositories. It seemed to work fine, which was kind of novel in my experience to this point.

Gradle

I set up a simple test project, largely because Artifactory support kept asking for one to demonstrate the various bugs I was running into. It is called gradleart to represent the attempt to use Gradle with Artifactory. The project (such as it is) is written in Groovy.

gradleart

There is only one class in the project proper, and that is the very simple com.mydomain.gradle.NothingMuch.groovy located under src/groovy in the project.

package com.mydomain.gradle

import org.apache.log4j.*
import com.ibm.db2.jcc.DB2Driver

class NothingMuch {
  static {
    Logger.getLogger("NothingMuch").log(Priority.FATAL, "Just saying.")
    println "Just said, 'Just saying.'"
    DB2Driver.newInstance()
  }
}

In addition there is only one test class, and that is the very simple com.mydomain.tests.gradle.NothingMuchTests.groovy located under test/groovy in the project.

package com.mydomain.tests.gradle

import com.mydomain.gradle.NothingMuch

class NothingMuchTests extends GroovyTestCase {
  void testClassLoad() {
    try {
      NothingMuch.classLoader.loadClass('com.mydomain.gradle.NothingMuch')
    } catch (Throwable t) {
      assert false, "Failed to load class com.mydomain.gradle.NothingMuch with Throwable ${t}"
    }
  }
}

This leads to the build.gradle file.

apply plugin: 'groovy'
apply plugin: 'artifactory'

// "name" is not settable in the build script, so assert that it has the correct value instead.
// This actually gets set in settings.gradle as:
// rootProject.name = 'gradleart'
assert name == 'gradleart'

// The generated artifact is part of the "com.mydomain" group (or "organisation").
group = 'com.mydomain'

artifactory {

  contextUrl = 'http://artifactory.mydomain.com/artifactory'

  publish {
    repository {
      repoKey = 'libs-release-local'
      // The publishuser and publishpassword authentication credentials are set in gradle.properties.
      //    publishuser=someuser
      //    publishpassword=somepassword
      username = publishuser
      password = publishpassword
      ivy {
        ivyLayout = '[organization]/[module]/ivy-[revision].xml'
        artifactLayout = '[organization]/[module]/[revision]/[module]-[revision](-[classifier]).[ext]'
        mavenCompatible = false
      }
    }
    defaults {
      publishPom = false
    }
  }
  resolve {
    repository {
      repoKey = 'libs-release'
      ivy {
        ivyLayout = '[organization]/[module]/ivy-[revision].xml'
        artifactLayout = '[organization]/[module]/[revision]/[module]-[revision](-[classifier]).[ext]'
        mavenCompatible = false
      }
    }
  }
}

dependencies {
  groovy group: 'org.codehaus.groovy', name: 'groovy-all', version: '1.7.5'
  compile group: 'junit', name: 'junit', version: '4.8.1'
  compile group: 'log4j', name: 'log4j', version: '1.2.15'
  compile group: 'com.ibm', name: 'db2jcc', version: '9'
}

sourceSets {
  main {
    groovy {
      srcDir 'src/groovy'
    }
  }
  test {
    groovy {
      srcDir 'test/groovy'
    }
  }
}

uploadArchives {
  doFirst {
    assert version && version != 'unspecified', "Version not specified. Specify as:\n\tgradle -Pversion=1.0 [task]\n"
  }
  uploadDescriptor = true
}

Groovy, Java 6, and Classpath Wildcards

Java 6 introduced the capability of selecting all jar files in a directory for the classpath rather than having to list every jar file. With Groovy 1.7.1, Groovy caught up with this capability…sort of.

(It is a little odd to me that Groovy had to catch up to this in the first place actually; it seems like Groovy might have addressed this syntactic simplification in the spirit of doing the right thing before Java did.)

Starting with the Java 6 syntax, wildcard support is simply “*” following a directory reference, like “./lib/*”. So when running Groovy you might do this.

groovy -classpath "../batch.jar;../lib/*;." testClasspath

Note that on Unix the wildcard needs to be in quotes or the shell will take over expanding it (and the path separator there is “:” rather than “;”). This does not work at all on Windows though. Just about anything you try along this line results in:

The syntax of the command is incorrect.

To avoid this error in Windows you must not put anything but the wildcard entry in the classpath and eliminate the quotes. This makes it a very limited solution at best.

groovy -classpath ../lib/* testClasspath

All of these Windows problems tie to the processing of the classpath by startGroovy.bat. While I do not have a solution to that, it seems to be avoidable by setting the CLASSPATH environment variable rather than trying to pass it into the groovy script.

set CLASSPATH=../batch.jar;../lib/*;.
groovy testClasspath
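A quick way to see what the JVM actually ended up with, however the classpath was supplied, is to print the java.class.path system property from a trivial class:

```java
public class ShowClasspath {
    public static void main(String[] args) {
        // Wildcard entries like lib/* are expanded by the launcher before
        // main runs, so this shows the concrete list of jars in use.
        System.out.println(System.getProperty("java.class.path"));
    }
}
```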

Why You Should Never Set GROOVY_HOME

I have half a dozen books on Groovy and spend much of my time on the Groovy web site, so it was a big surprise to me when I found out that setting the GROOVY_HOME system environment variable is a really bad idea. Everything I ever read said:

  1. Install
  2. Set GROOVY_HOME
  3. Optionally add the Groovy bin directory to your system path

(And actually from my recent thread on the Groovy user email list, this has been modified on the web site to suggest setting GROOVY_HOME is optional, but there is no real indication of what the tradeoffs are.)

If you ever need to run multiple versions of Groovy (as I did recently while upgrading Groovy to the version used by my upgraded Grails version), having GROOVY_HOME set is a real detriment. When you run a script out of a Groovy bin directory, it sets GROOVY_HOME based on the location of that script only if GROOVY_HOME is not already set. If you have set it and then run a different version's script, you are likely to get a classloader error because of the mismatch.

Exception in thread "main" java.lang.NoClassDefFoundError: org/codehaus/groovy/tools/GroovyStarter
Caused by: java.lang.ClassNotFoundException: org.codehaus.groovy.tools.GroovyStarter
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)

So GROOVY_HOME is only likely to cause you pain if you set it, unless you are intentionally trying to run the script of one version with the guts of another, or the internal Groovy mechanism for setting it is misbehaving somehow.

Grails 1.3.5 Tomcat plugin and JNDI environment variables

This approach will not work for the Tomcat plugin included in Grails 2.0.4 (and presumably all Grails 2 releases). I have written an updated approach for that.

We have a number of environment variables in our web applications that control how our applications behave. For instance, there is a variable that is displayed at the top of the pages (outside of production) that identifies what instance you are in.

When we deploy our war file into a Tomcat instance this is controlled by the conf/context.xml file like so.

<Environment name="appServerInstance" value="My server" type="java.lang.String" override="false"/>

In Grails applications using Jetty (the Jetty plugin) this is easy to control in the web-app/WEB-INF/jetty.xml file like so.

<Configure>
...
  <New id="appServerInstance" class="org.mortbay.jetty.plus.naming.EnvEntry">
    <Arg></Arg>
    <Arg type="java.lang.String">appServerInstance</Arg>
    <Arg type="java.lang.String">My server</Arg>
    <Arg type="boolean">true</Arg>
  </New>
...
</Configure>

Now that Grails uses Tomcat I have been looking for a way to control this in development mode – when using grails run-app from the command line.

The plugin documents a way of handling resource tags using grails.naming.entries in Config.groovy. After repeated attempts and some debugging it became clear that it would not handle environment tags. It is not too difficult to modify the plugin to accommodate environment (env-entry) tags however.

Cutting to the chase, JNDI resources are represented by instances of org.apache.catalina.deploy.ContextResource in the Tomcat naming resources, and these are handled by the plugin. JNDI environments are represented by instances of org.apache.catalina.deploy.ContextEnvironment, and these are not handled by the plugin.

Within the plugin directory (./plugins/tomcat-1.3.5 in my case) under directory src/groovy/org/grails/tomcat you will find TomcatServer.groovy. This is the class that gets things rolling, and the preStart() method is where the grails.naming.entries property is processed. It looked like this when I started.

private preStart() {
    eventListener?.event("ConfigureTomcat", [tomcat])
    def jndiEntries = grailsConfig?.grails?.naming?.entries

    if (jndiEntries instanceof Map) {
        jndiEntries.each { name, resCfg ->
            if (resCfg) {
                if (!resCfg["type"]) {
                    throw new IllegalArgumentException("Must supply a resource type for JNDI configuration")
                }
                def res = loadInstance('org.apache.catalina.deploy.ContextResource')
                res.name = name
                res.type = resCfg.remove("type")
                res.auth = resCfg.remove("auth")
                res.description = resCfg.remove("description")
                res.scope = resCfg.remove("scope")
                // now it's only the custom properties left in the Map...
                resCfg.each { key, value ->
                    res.setProperty(key, value)
                }

                context.namingResources.addResource res
            }
        }
    }
}

It expects a Map that looks something like this.

grails.naming.entries = [
    "jdbc/ArtDB": [
        type: "javax.sql.DataSource",
        auth: "Container",
        description: "Art data source",
        url: "jdbc:db2://myserver:12345/MyDB",
        username: "auser",
        password: "apassword",
        driverClassName: "com.ibm.db2.jcc.DB2Driver"
    ],
    "jms/bananas": [
        type: "org.apache.activemq.command.ActiveMQTopic",
        description: "Fruit salad",
        factory: "org.apache.activemq.jndi.JNDIReferenceFactory",
        physicalName: "bananas"
    ]
]

My thought was that, since there are multiple types of naming resources each needing to be treated differently, why not support maps by type? So my proposed Config.groovy might contain this.

grails.naming.entries = [
    "resources": [
        "jdbc/ArtDB": [
            type: "javax.sql.DataSource",
            auth: "Container",
            description: "Art data source",
            url: "jdbc:db2://myserver:12345/MyDB",
            username: "auser",
            password: "apassword",
            driverClassName: "com.ibm.db2.jcc.DB2Driver"
        ],
        "jms/bananas": [
            type: "org.apache.activemq.command.ActiveMQTopic",
            description: "Fruit salad",
            factory: "org.apache.activemq.jndi.JNDIReferenceFactory",
            physicalName: "bananas"
        ]
    ],
    "environments": [
        "appServerInstance": [
            type: "java.lang.String",
            value: "My server",
            override: "false"
        ]
    ]
]

I thought it would be nice to support the original style that considers everything a JNDI resource too, and I was able to accomplish this as long as none of them are named the same as my expected resource type names (“resources” and “environments” in my implementation). After figuring out what was needed, modifying the code was not a big deal. Here is what I ended up with as a replacement for the old preStart method.

private preStart() {
    eventListener?.event("ConfigureTomcat", [tomcat])
    def jndiEntries = grailsConfig?.grails?.naming?.entries

    if (jndiEntries instanceof Map) {
        def envs = jndiEntries.remove('environments')
        if (envs instanceof Map) {
            envs.each { name, cfg ->
                if (cfg) {
                    addEnvironment(name, cfg)
                }
            }
        }
        def ress = jndiEntries.remove('resources')
        if (ress instanceof Map) {
            ress.each { name, cfg ->
                if (cfg) {
                    addResource(name, cfg)
                }
            }
        }
        jndiEntries.each { name, cfg ->
            if (cfg) {
                addResource(name, cfg)
            }
        }
    }
}

private addEnvironment(name, envCfg) {
    if (!envCfg["type"]) {
        throw new IllegalArgumentException("Must supply an environment type for JNDI configuration")
    }
    def env = loadInstance('org.apache.catalina.deploy.ContextEnvironment')
    env.name = name
    env.type = envCfg.type
    env.description = envCfg.description
    env.override = envCfg.override as boolean
    env.value = envCfg.value

    context.namingResources.addEnvironment env
}

private addResource(name, resCfg) {
    if (!resCfg["type"]) {
        throw new IllegalArgumentException("Must supply a resource type for JNDI configuration")
    }
    def res = loadInstance('org.apache.catalina.deploy.ContextResource')
    res.name = name
    res.auth = resCfg.remove("auth")
    res.scope = resCfg.remove("scope")
    res.type = resCfg.remove("type")
    res.description = resCfg.remove("description")
    // now it's only the custom properties left in the Map...
    resCfg.each { key, value ->
        res.setProperty(key, value)
    }

    context.namingResources.addResource res
}

There is room for improvement (it could be DRYer for sure) but it works and I have already spent too much time figuring this much out. I hope someone else finds this useful.

It was surprising to me that no one seemed to be missing this capability and that there was no readily available help out there regarding it.