Thursday, March 17, 2016

Grails MappingException: Missing type or column for column

I've been trying to upgrade an older project from Grails 2.3.9 to Grails 2.4.4. I've done this sort of upgrade with many of my other projects without issues, but this time around I was stumped by the following error:

 MappingException: Missing type or column for column
The error message was not particularly helpful, since I knew the domain class mapping was correct in the previous version of Grails, and the way associations between domain classes are mapped hasn't changed. Finding the underlying cause wasn't easy, so I wanted to share it in case you run into a similar problem.

In my domain classes there were many getter methods that had no corresponding field defined. For example, I had a getter called getFoo(), but there was no corresponding field in the domain class called 'foo'. When using such getter methods in Grails, we should declare them in a static list of transients as follows:


static transients = ['foo']

Once these transients were defined, the error messages stopped.
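As a minimal sketch (the Person class and fullName property here are hypothetical, not from my actual project), a derived getter with no backing column looks like this:

class Person {
    String firstName
    String lastName

    // 'fullName' is derived and has no backing column, so Grails must be
    // told not to map it; otherwise Grails 2.4 throws the MappingException
    static transients = ['fullName']

    String getFullName() {
        "$firstName $lastName"
    }
}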


Wednesday, March 9, 2016

Grails decoding default html encoding

I was trying to embed a URL with ampersands '&' in my HTML page using the following expression:

GSP file
${url}


and since my default codec in Config.groovy was set to 'html':

Config.groovy
grails.views.default.codec = "html"

The ampersand was always being converted to its HTML entity:

&amp;

What I wanted was just the '&' character in unencoded format.

To do that we can use the raw() function as follows:

GSP file
${raw(url)}
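For example, given a hypothetical url value of http://example.com/page?a=1&b=2, the default codec would render the ampersand as &amp; inside an href, while raw() leaves it intact:

<a href="${raw(url)}">Download</a>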

References:
http://stackoverflow.com/questions/1337464/overriding-grails-views-default-codec-html-config-back-to-none

Wednesday, March 2, 2016

ElasticSearch - Failed bulk item: RemoteTransportException

I was having issues running a bulk import of millions of records into Elasticsearch, which resulted in the error:

Failed bulk item: RemoteTransportException

I tried several changes to the elasticsearch.yml configuration but continued to get the above error. To review the status of each node, I executed the following command:

curl -XGET http://ip_address:9200/_nodes/stats?pretty

To identify which node was causing the rejections, I looked at the thread_pool.bulk stats for each node and found that one of the nodes had a high rejection count:


      "thread_pool" : {
        "bulk" : {
          "threads" : 100,
          "queue" : 0,
          "active" : 0,
          "rejected" : 1482,
          "largest" : 100,
          "completed" : 121512
        },
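A quicker way to compare rejections across all nodes at once is the _cat API (available in Elasticsearch 2.x; the column names below are the standard bulk thread pool headers):

curl -XGET 'http://ip_address:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected'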


I then had a look at the memory profile under the jvm.mem stats:


        "mem" : {
          "heap_used_in_bytes" : 3336515992,
          "heap_used_percent" : 62,
          "heap_committed_in_bytes" : 5333843968,
          "heap_max_in_bytes" : 5333843968,
          "non_heap_used_in_bytes" : 55600912,
          "non_heap_committed_in_bytes" : 84336640,
          "pools" : {
            "young" : {
              "used_in_bytes" : 121153000,
              "max_in_bytes" : 279183360,
              "peak_used_in_bytes" : 279183360,
              "peak_max_in_bytes" : 279183360
            },
            "survivor" : {
              "used_in_bytes" : 21682640,
              "max_in_bytes" : 34865152,
              "peak_used_in_bytes" : 34865152,
              "peak_max_in_bytes" : 34865152
            },
            "old" : {
              "used_in_bytes" : 3193680352,
              "max_in_bytes" : 5019795456,
              "peak_used_in_bytes" : 3865208384,
              "peak_max_in_bytes" : 5019795456
            }
          }
        },

I could see that the node was running out of memory: heap usage was at 62%, exceeding the 50% of the heap I had reserved for indexing in my configuration.

/etc/elasticsearch/elasticsearch.yml configuration


indices.memory.index_buffer_size: 50%
 

After finding the node causing the problem, I looked at its log files under the /var/log/elasticsearch folder, where I could see the exception:

Caused by: java.lang.OutOfMemoryError: Java heap space

[2016-02-29 02:35:39,599][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited


Double-check that mlockall is true:
[root@webserver logs]# curl http://x.x.x.x:9200/_nodes/process?pretty
{
  "cluster_name" : "cpi",
  "nodes" : {
    "BPKQWgTFTtmT4uh2w7WRLw" : {
      "name" : "cpi1",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 28226,
        "mlockall" : true
      }
    },
    "0LVlWGZvSRu3E8lC6bVzfw" : {
      "name" : "cpi2",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 17308,
        "mlockall" : true
      }
    },
    "07kUt8PPShqMQ8-W_cigWA" : {
      "name" : "cpi3",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 20625,
        "mlockall" : true
      }
    }
  }
}


I modified the /etc/security/limits.conf and added the following lines:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
root - memlock unlimited
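Note that memory locking itself is controlled by bootstrap.mlockall in elasticsearch.yml; as the mlockall: true output above shows, this was already enabled on my cluster, and the limits.conf change is what lets the lock actually succeed. For reference, the relevant setting and a quick way to verify the new limit:

# elasticsearch.yml: request that the heap be locked into RAM
bootstrap.mlockall: true

# after restarting, run as the elasticsearch user; should report 'unlimited'
ulimit -l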


Also, the number of concurrent threads importing into Elasticsearch was reduced from 8 to 5.



Thursday, February 18, 2016

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

After upgrading Elasticsearch to 2.1.0, I tried to do an export by pulling in a large number of records in one go and got the following error:

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

The proper fix would be to modify the code to use the scroll API, but as a quick workaround you can raise the limit:

 curl -XPUT http://1.2.3.4:9200/index/_settings -d '{ "index" : { "max_result_window" : 1000000}}'
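For reference, a minimal sketch of the scroll approach (the index name and match_all query are placeholders): the first search opens a scroll context, and each response returns a _scroll_id that is passed back to fetch the next batch until no hits remain.

curl -XGET 'http://1.2.3.4:9200/index/_search?scroll=1m' -d '{ "size" : 1000, "query" : { "match_all" : {} } }'

curl -XGET 'http://1.2.3.4:9200/_search/scroll' -d '{ "scroll" : "1m", "scroll_id" : "<scroll_id from previous response>" }'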

Reference:

https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html

Sunday, February 14, 2016

elasticsearch 2.1.0 and "Unable to get effective rights from ACL: Overlapped I/O operation is in progress" error

When starting Elasticsearch 2.1.0 on Windows 7, I got the following error:


Likely root cause: java.io.IOException: Unable to get effective rights from ACL: Overlapped I/O operation is in progress.

        at sun.nio.fs.WindowsFileSystemProvider.getEffectiveAccess(WindowsFileSystemProvider.java:344)
        at sun.nio.fs.WindowsFileSystemProvider.checkAccess(WindowsFileSystemProvider.java:397)
        at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:246)
        at org.elasticsearch.bootstrap.Security.addPath(Security.java:227)
        at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:191)
        at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:184)
        at org.elasticsearch.bootstrap.Security.configure(Security.java:105)
        at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

When I checked the Java version being used, I found it was pointing at jdk1.7.0_21:

echo %JAVA_HOME%

Upgrading to jdk1.8.0_65 resolved the issue.
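If several JDKs are installed, pointing JAVA_HOME at the newer one is enough (the install path below is hypothetical):

set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_65
set PATH=%JAVA_HOME%\bin;%PATH%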

Monday, January 18, 2016

VCFtools and Tabix installation on Linux (RHEL)

VCFtools installation


Download vcftools from here: https://github.com/vcftools/vcftools/zipball/master
Unzip the downloaded package, then install the RHEL packages required to build vcftools using the following:



yum install autoconf
yum install automake
yum install gcc
yum group install "Development Tools"
yum install zlib
yum install zlib-devel

export PERL5LIB=/path/to/your/vcftools-directory/src/perl/
cd vcftools/
./autogen.sh
./configure
make
make install



The binary executables will be installed in the /usr/local/bin folder.
To add the executables to your path use the following:
export PATH=$PATH:/usr/local/bin


To set it permanently, add the following line to the /etc/profile file just before PATH gets exported:
PATH=$PATH:/usr/local/bin:/usr/local/tabix
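As a quick sanity check, running vcftools with no arguments should print its version banner and usage summary:

vcftools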

Tabix installation

(1) Download the newest tabix release (at the time of writing, tabix-0.2.6).

(2) Extract the file:

tar xvjf tabix-0.2.6.tar.bz2

(3) Compile the program by typing make on the UNIX command line.
(4) Export the path by adding the following line to your .bashrc file, saving the file, and running source ~/.bashrc. Note: path_to_tabix is the directory where tabix is installed.


export PATH=$PATH:/path_to_tabix/tabix-0.2.6
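Once tabix is on the PATH, a typical workflow (example.vcf is a placeholder file) is to bgzip-compress a VCF, index it, and query a region:

bgzip example.vcf                       # bgzip ships with tabix; produces example.vcf.gz
tabix -p vcf example.vcf.gz             # builds the example.vcf.gz.tbi index
tabix example.vcf.gz chr1:10000-20000   # fetch records overlapping the region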

 

References:


https://vcftools.github.io/examples.html
http://genometoolbox.blogspot.com.au/2013/11/installing-tabix-on-unix.html