Monday, April 11, 2016

ReferenceError: check is not defined

Meteor version: v1.3

I was following the tutorial on the Meteor website, found here:

https://www.meteor.com/tutorials/blaze/security-with-methods

The tutorial recommends validating user input with a call like the following:

check(text, String);

However, when I ran the application, I got the following error message:

Exception while invoking method 'tasks.remove' ReferenceError: check is not defined
Nowhere does the tutorial mention that you have to add the 'check' package. Running the following command solved the problem:

meteor add check
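
For context, here is a minimal sketch of the kind of method the tutorial builds ('tasks.remove' and Tasks follow the tutorial; the import path assumes the tutorial's project layout). With the package added, check can also be imported explicitly:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Tasks } from '../imports/api/tasks.js';

Meteor.methods({
  'tasks.remove'(taskId) {
    // throws a Match.Error if taskId is not a String
    check(taskId, String);
    Tasks.remove(taskId);
  },
});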

Thursday, April 7, 2016

Meteor - TypeError: Cannot call method 'config' of undefined

Following the instructions in the Meteor tutorial:

https://www.meteor.com/tutorials/blaze/adding-user-accounts

I ran into the following error:

TypeError: Cannot call method 'config' of undefined
There was an import statement I had to add to a main.js file:

import '../imports/startup/accounts-config.js';

It turns out I had added the import line to the wrong 'main.js' file. There are two main.js files: one in the client folder and the other in the server folder.

Once I moved the import statement to the client version, the app started working again.
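
For reference, the reason it must go in the client file: Accounts.ui comes from the accounts-ui package, which only exists on the client, so on the server Accounts.ui is undefined and calling .config() on it fails. A minimal sketch of the two files as the tutorial lays them out (the passwordSignupFields value follows the tutorial):

imports/startup/accounts-config.js
import { Accounts } from 'meteor/accounts-base';

Accounts.ui.config({
  passwordSignupFields: 'USERNAME_ONLY',
});

client/main.js
import '../imports/startup/accounts-config.js';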


'npm' is not a Meteor command

I had previously installed Meteor months ago with the intention of giving it a go, but never got around to it. Fast forward to today: I tried to reinstall Meteor using the latest installer for version 1.3, then proceeded to follow the tutorials, where one step required me to enter the command:

meteor npm install

I received the following error:

'npm' is not a Meteor command. See 'meteor --help'.

It turns out that the installer had not actually updated my version of Meteor. To check my Meteor version, I used the following command:

meteor --version

Which showed me "Meteor 1.2.0.2".

Solution

To upgrade Meteor, run the following command:

meteor update
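
After the update, the same version check should report 1.3, and the npm passthrough becomes available (a quick sanity check):

meteor --version
meteor npm --version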
Reference: https://forums.meteor.com/t/installing-meteor-1-3-with-an-existing-1-2-left-in-place/20670

Thursday, March 17, 2016

Grails MappingException: Missing type or column for column

I've been trying to upgrade an older project from Grails 2.3.9 to Grails 2.4.4. I've done this sort of upgrade on many of my other projects without issue, but this time around I got stumped by the following error:

MappingException: Missing type or column for column

The error message was not particularly helpful, since I knew the domain class mapping was correct in the previous version of Grails, and the way associations between domain classes are mapped hasn't changed. Finding the underlying cause wasn't easy, so I wanted to share it in case you run into a similar problem.

In my domain classes there were many getter methods with no corresponding field defined. For example, I had a getter called getFoo(), but no field in the domain class called 'foo'. When using such getter methods in Grails, we should normally declare them in a static list of transients as follows:


static transients = ['foo']

Once these transients were declared, the error messages stopped.
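
A minimal sketch of the pattern (Book and foo are placeholder names):

class Book {
    String title

    // derived value with no backing field in the table
    String getFoo() {
        "${title}-foo"
    }

    // tell GORM/Hibernate not to map a 'foo' column
    static transients = ['foo']
}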


Wednesday, March 9, 2016

Grails decoding default HTML encoding

I was trying to embed a URL with ampersands '&' in my HTML page using the following expression:

GSP file
${url}


and since my default codec in Config.groovy was set to 'html':

Config.groovy
grails.views.default.codec = "html"

The ampersand was always being converted to the HTML entity:

&amp;

What I wanted was just the raw '&' character, unencoded.

To do that we can use the raw() function as follows:

GSP file
${raw(url)}
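
To illustrate the difference (the url value here is made up):

GSP file
<%-- with url = 'https://example.com/?a=1&b=2' --%>
${url}       <%-- renders https://example.com/?a=1&amp;b=2 --%>
${raw(url)}  <%-- renders https://example.com/?a=1&b=2 --%>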

References:
http://stackoverflow.com/questions/1337464/overriding-grails-views-default-codec-html-config-back-to-none

Wednesday, March 2, 2016

ElasticSearch - Failed bulk item: RemoteTransportException

I was having issues running a bulk import of millions of records into Elasticsearch, which resulted in the error:

Failed bulk item: RemoteTransportException

I tried several changes to the elasticsearch.yml configuration but continued to get the above error. To review the status of each node, I executed the following command:

curl -XGET http://ip_address:9200/_nodes/stats?pretty

To identify which node was causing the rejections, I looked at the thread_pool.bulk stats for each node and found that one of the nodes had a high rejection count:


      "thread_pool" : {
        "bulk" : {
          "threads" : 100,
          "queue" : 0,
          "active" : 0,
          "rejected" : 1482,
          "largest" : 100,
          "completed" : 121512
        },
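
As a side note, the nodes stats API accepts a metric filter, which trims the output down to just the sections of interest:

curl -XGET http://ip_address:9200/_nodes/stats/thread_pool,jvm?pretty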


I then had a look at the memory profile under the jvm.mem stats:


        "mem" : {
          "heap_used_in_bytes" : 3336515992,
          "heap_used_percent" : 62,
          "heap_committed_in_bytes" : 5333843968,
          "heap_max_in_bytes" : 5333843968,
          "non_heap_used_in_bytes" : 55600912,
          "non_heap_committed_in_bytes" : 84336640,
          "pools" : {
            "young" : {
              "used_in_bytes" : 121153000,
              "max_in_bytes" : 279183360,
              "peak_used_in_bytes" : 279183360,
              "peak_max_in_bytes" : 279183360
            },
            "survivor" : {
              "used_in_bytes" : 21682640,
              "max_in_bytes" : 34865152,
              "peak_used_in_bytes" : 34865152,
              "peak_max_in_bytes" : 34865152
            },
            "old" : {
              "used_in_bytes" : 3193680352,
              "max_in_bytes" : 5019795456,
              "peak_used_in_bytes" : 3865208384,
              "peak_max_in_bytes" : 5019795456
            }
          }
        },

I could see that the node was running low on memory: heap usage was at 62%, exceeding the 50% of the heap I had reserved for indexing in my configuration.

/etc/elasticsearch/elasticsearch.yml configuration


indices.memory.index_buffer_size: 50%
 

After finding the node causing the problem, I looked at the log files under the /var/log/elasticsearch folder, where I could see the exception:

Caused by: java.lang.OutOfMemoryError: Java heap space

[2016-02-29 02:35:39,599][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited


Double-check that mlockall is true:
[root@webserver logs]# curl http://x.x.x.x:9200/_nodes/process?pretty
{
  "cluster_name" : "cpi",
  "nodes" : {
    "BPKQWgTFTtmT4uh2w7WRLw" : {
      "name" : "cpi1",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 28226,
        "mlockall" : true
      }
    },
    "0LVlWGZvSRu3E8lC6bVzfw" : {
      "name" : "cpi2",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 17308,
        "mlockall" : true
      }
    },
    "07kUt8PPShqMQ8-W_cigWA" : {
      "name" : "cpi3",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 20625,
        "mlockall" : true
      }
    }
  }
}


I modified /etc/security/limits.conf and added the following lines:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
root - memlock unlimited
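
After restarting the node, the new limits can be sanity-checked from a shell (the -s flag forces a login shell for the service account):

sudo su -s /bin/bash -c 'ulimit -l -n' elasticsearch

If the change took effect, max locked memory should report 'unlimited' and open files 65535.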


Also, the number of concurrent threads importing into Elasticsearch was reduced from 8 to 5, to ease the pressure on each node's bulk queue.



Thursday, February 18, 2016

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

Upon upgrading Elasticsearch to 2.1.0, I tried to do an export by pulling in a large number of records in one go and got the following error:

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

I suppose the proper fix would be to modify the code to use the scroll API (a sketch of that follows below), but for a quick fix you can raise the limit:

curl -XPUT http://1.2.3.4:9200/index/_settings -d '{ "index" : { "max_result_window" : 1000000 } }'
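
For comparison, here is a sketch of the scroll approach the error message recommends (host, index name, and size are placeholders; the first call returns a _scroll_id that you pass back to page through the rest of the results):

curl -XGET 'http://1.2.3.4:9200/index/_search?scroll=1m' -d '{ "size" : 1000, "query" : { "match_all" : {} } }'

curl -XGET 'http://1.2.3.4:9200/_search/scroll' -d '{ "scroll" : "1m", "scroll_id" : "<_scroll_id from the previous response>" }'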

Reference:

https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html