Thursday, March 17, 2016

Grails MappingException: Missing type or column for column

I've been trying to upgrade an older project from Grails 2.3.9 to Grails 2.4.4.
I've done this sort of upgrade with many of my other projects without issues, but this time around I got stumped on the following error:

 MappingException: Missing type or column for column
The error message was not particularly helpful, since I knew the domain class mapping was correct in a previous version of Grails, and the way associations between domain classes are defined hasn't changed. Finding the underlying cause wasn't easy, so I wanted to share it in case you run into a similar problem.

In my domain classes there were many getter methods that had no corresponding field defined. For example, I had a getter called getFoo(), but there was no corresponding field in the domain class called 'foo'. When using such getter methods in Grails, we should normally declare them in a static list of transients as follows:


static transients = ['foo']

Once these transients were defined, the error messages stopped.
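
To illustrate, here is a minimal sketch of a domain class with a derived getter (the class and field names are hypothetical):

Book.groovy
class Book {
    String title

    // Derived value with no backing field, so GORM cannot map it to a column
    String getFoo() {
        "${title}-derived"
    }

    // Tell GORM/Hibernate not to persist the 'foo' property
    static transients = ['foo']
}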


Wednesday, March 9, 2016

Grails decoding default html encoding

I was trying to embed a URL with ampersands '&' in my HTML page using the following expression:

GSP file
${url}


and since my default codec in Config.groovy was set to 'html':

Config.groovy
grails.views.default.codec = "html"

The ampersand was always being converted to the HTML entity:

&amp;

What I wanted was just the raw '&' character, unencoded.

To do that we can use the raw() function as follows:

GSP file
${raw(url)}
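
For example, assuming the controller passes a 'url' model attribute containing ampersands, the link can be rendered unencoded like this (just a sketch):

GSP file
<%-- raw() bypasses the default HTML codec, so the href is emitted as-is --%>
<a href="${raw(url)}">View report</a>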

References:
http://stackoverflow.com/questions/1337464/overriding-grails-views-default-codec-html-config-back-to-none

Wednesday, March 2, 2016

ElasticSearch - Failed bulk item: RemoteTransportException

I was having issues running a bulk import of millions of records into Elasticsearch, which resulted in the error:

Failed bulk item: RemoteTransportException

I tried several changes to the elasticsearch.yml configuration but continued to get the above error. To review the status of each node, I executed the following command:

curl -XGET http://ip_address:9200/_nodes/stats?pretty

To identify which node was causing the rejections, I looked at the thread_pool.bulk stats for each node and found that one of the nodes had a high rejection count:


      "thread_pool" : {
        "bulk" : {
          "threads" : 100,
          "queue" : 0,
          "active" : 0,
          "rejected" : 1482,
          "largest" : 100,
          "completed" : 121512
        },
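
As a side note, the nodes stats API can be filtered down to just the thread pool section, which makes this check easier (a variation on the command above):

curl -XGET http://ip_address:9200/_nodes/stats/thread_pool?pretty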


I then had a look at the memory profile under the jvm.mem stats:


        "mem" : {
          "heap_used_in_bytes" : 3336515992,
          "heap_used_percent" : 62,
          "heap_committed_in_bytes" : 5333843968,
          "heap_max_in_bytes" : 5333843968,
          "non_heap_used_in_bytes" : 55600912,
          "non_heap_committed_in_bytes" : 84336640,
          "pools" : {
            "young" : {
              "used_in_bytes" : 121153000,
              "max_in_bytes" : 279183360,
              "peak_used_in_bytes" : 279183360,
              "peak_max_in_bytes" : 279183360
            },
            "survivor" : {
              "used_in_bytes" : 21682640,
              "max_in_bytes" : 34865152,
              "peak_used_in_bytes" : 34865152,
              "peak_max_in_bytes" : 34865152
            },
            "old" : {
              "used_in_bytes" : 3193680352,
              "max_in_bytes" : 5019795456,
              "peak_used_in_bytes" : 3865208384,
              "peak_max_in_bytes" : 5019795456
            }
          }
        },

I could see that the node was running out of memory, since heap usage was at 62%, exceeding the 50% of memory I had reserved for indexing:

/etc/elasticsearch/elasticsearch.yml configuration


indices.memory.index_buffer_size: 50%
 

After finding the node causing the problem, I looked at the log files under the /var/log/elasticsearch folder, where I could see the exception:

Caused by: java.lang.OutOfMemoryError: Java heap space

[2016-02-29 02:35:39,599][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
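
For reference, memory locking is enabled in elasticsearch.yml; in this version of Elasticsearch (2.1.x) the setting is:

/etc/elasticsearch/elasticsearch.yml configuration

bootstrap.mlockall: true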


Double-check that mlockall is true:
[root@webserver logs]# curl http://x.x.x.x:9200/_nodes/process?pretty
{
  "cluster_name" : "cpi",
  "nodes" : {
    "BPKQWgTFTtmT4uh2w7WRLw" : {
      "name" : "cpi1",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 28226,
        "mlockall" : true
      }
    },
    "0LVlWGZvSRu3E8lC6bVzfw" : {
      "name" : "cpi2",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 17308,
        "mlockall" : true
      }
    },
    "07kUt8PPShqMQ8-W_cigWA" : {
      "name" : "cpi3",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 20625,
        "mlockall" : true
      }
    }
  }
}


I modified /etc/security/limits.conf and added the following lines:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
root - memlock unlimited
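
After editing limits.conf, restart the Elasticsearch service on the affected node so the new limits take effect, then re-run the _nodes/process check above to confirm that mlockall is still true (assuming a service-based install):

service elasticsearch restart
curl http://x.x.x.x:9200/_nodes/process?pretty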


Also, the number of concurrent threads importing into Elasticsearch was reduced from 8 to 5.
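
The import code itself isn't shown here, but the throttling amounts to capping the importer's worker pool. A minimal Groovy sketch, where 'batches' and sendBulkRequest() are hypothetical placeholders for the real importer's payloads and bulk call:

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Hypothetical placeholders standing in for the real importer's pieces
def batches = (1..100).collect { "bulk-payload-${it}" }
def sendBulkRequest = { payload ->
    // in the real importer this would POST the payload to /_bulk
    println "sending ${payload}"
}

// Cap concurrent bulk imports at 5 worker threads (down from 8)
def pool = Executors.newFixedThreadPool(5)
batches.each { batch ->
    pool.submit { sendBulkRequest(batch) }
}
pool.shutdown()
pool.awaitTermination(1, TimeUnit.HOURS)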