Tuesday, December 13, 2016

Grails 3.x "Execution failed for task ':bootRun'"

When trying to run a Grails application using the command "grails run-app", I encountered the following error:

 FAILURE: Build failed with an exception.  
 * What went wrong:  
 Execution failed for task ':bootRun'.  
 > Process 'command '/usr/java/jdk1.8.0_101/bin/java'' finished with non-zero exit value 1  

Even after looking at the stacktrace, it wasn't clear what was causing this error. Judging from many Google searches, there are multiple possible causes for it, but none of the suggested solutions helped me.

As it turns out, Grails 3.x doesn't like having Loggers declared in domain classes.

For example, I had the following class definition:

 import org.apache.log4j.Logger  
  
 class MyDomainClass {  
   // declaring a logger as an instance property breaks bootRun in Grails 3.x  
   Logger logger = Logger.getLogger(MyDomainClass.class)  
 }  

Once I removed the Logger declaration, my application ran successfully. I know from experience that Grails 2.x allowed loggers in domain classes, but it seems this is no longer supported.
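If the class still needs logging, one workaround (a sketch, assuming the standard groovy.util.logging annotations) is to use @Slf4j, which generates a static logger field rather than a persistent instance property:

 import groovy.util.logging.Slf4j  
  
 @Slf4j  
 class MyDomainClass {  
   def someMethod() {  
     // 'log' here is the static field generated by @Slf4j, not a GORM-mapped property  
     log.info("logging from a domain class")  
   }  
 }  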

I hope this saves somebody else from painful hours of debugging!

Thursday, December 8, 2016

Executing command-line using Groovy

Running command-line executions through Groovy may at first seem straightforward. But after stepping through a real-world example, I found the need to write a function that makes the process a little easier, as shown in the code below:


      /**  
       * Execute a command on the command line.  
       * @param command      the command to run (a String or List; anything with an execute() method)  
       * @param os           optional stream to receive stdout; if null, stdout is captured and returned  
       * @param doLog        whether to log the command being run  
       * @param ignoreStdErr if false, any output on stderr raises an exception  
       * @return the captured stdout as a String, or null if an OutputStream was supplied  
       */  
      private String execCommand(Object command, OutputStream os = null, Boolean doLog = true, Boolean ignoreStdErr = false) {  
           boolean doReturn = false  
           if (os == null) {  
                // no stream supplied: capture stdout so it can be returned  
                os = new ByteArrayOutputStream()  
                doReturn = true  
           }  
           ByteArrayOutputStream serr = new ByteArrayOutputStream()  
           if (doLog) {  
                logger.info("Running command: " + command)  
           }  
           Process p = command.execute()  
           // consume stdout/stderr in background threads so the process can't block on a full buffer  
           p.consumeProcessOutput(os, serr)  
           p.waitFor()  
           // alternatives: p.waitForProcessOutput(), or p.waitForOrKill(30000) to enforce a timeout  
           if (!ignoreStdErr && serr.size() > 0) {  
                logger.error(serr.toString())  
                throw new Exception(serr.toString())  
           }  
           return doReturn ? os.toString() : null  
      }  
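For example, usage might look like the following (a sketch; the file and command names are illustrative, and the surrounding class is assumed to provide the logger used above):

      // capture stdout as a String  
      String listing = execCommand("ls -la")  
  
      // or stream stdout straight to a file  
      new File("listing.txt").withOutputStream { out ->  
           execCommand(["ls", "-la"], out)  
      }  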

Apache Camel and YAML processing example

I had trouble finding a full example of how to process YAML files using Apache Camel. So here's an example I've written:


         from("file:/home/philip/ga4gh_registration?move=archived")  
           .choice()  
             .when(header("CamelFileName").endsWith(".yml"))  
               .unmarshal()  
               .yaml(YAMLLibrary.SnakeYAML)  
               .process(new Processor() {  
                 @Override  
                 void process(Exchange exchange) throws Exception {  
                   String filename = exchange.getIn().getHeader("CamelFileName")  
                   logger.info("registerDataset: "+filename)  
                   def yamlObj = exchange.getIn().body  
                   logger.info("body="+yamlObj)  
                   logger.info("datasetName="+yamlObj.datasetName)  
                 }  
               })  
             .otherwise()  
               .log("Bad registration file: "+header("CamelFileName"))  
           .end()  
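For reference, a registration file consumed by this route might look like the following (a hypothetical example; only datasetName is actually read above):

 # /home/philip/ga4gh_registration/sample.yml  
 datasetName: NA12878_sample1  
 description: an example dataset registration  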

References:


http://camel.apache.org/yaml-data-format.html

Wednesday, December 7, 2016

Grails 3.x + SQLite

Grails 3.x does not support SQLite out of the box, but with a little effort we can get something working fairly quickly, as shown in the steps I've outlined below:

SQLite JDBC


Add the following lines to build.gradle to add the dependency for SQLite JDBC:

build.gradle
 compile 'org.xerial:sqlite-jdbc:3.15.1'  

You can find other versions of the JDBC library from the maven repo here:

https://mvnrepository.com/artifact/org.xerial/sqlite-jdbc


SQLite Dialect

Clone and build the Github project to add the supporting classes for the SQLite Dialect from here:



To build with Maven, run the following command:

 mvn -DskipTests install  

Then take the resulting JAR file, sqlite-dialect-0.1.0.jar, under the /target folder and copy it to the lib folder in your Grails project.

In Grails 3.x, the lib folder no longer exists by default. Create a new folder in the root of your project:

 mkdir $PROJECT_HOME/lib  

Copy the JAR file to the lib folder just created. For example:
 cp sqlite-dialect-0.1.0.jar /home/philip/agha/VariantStorage/lib  

Finally add the dependency of the JAR file in the build.gradle file:
 compile files('lib/sqlite-dialect-0.1.0.jar')  
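In context, both entries sit in the dependencies block of build.gradle (a sketch):

 dependencies {  
   compile 'org.xerial:sqlite-jdbc:3.15.1'  
   compile files('lib/sqlite-dialect-0.1.0.jar')  
 }  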

Tying it together

Datasources are configured in application.yml:

 dataSource:  
   dbCreate: "update"  
   url: "jdbc:sqlite:/home/philip/ga4gh-server-env/registry.db"  
   logSql: "true"  
   dialect: "org.hibernate.dialect.SQLiteDialect"  
   driverClassName: "org.sqlite.JDBC"  
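To verify the setup, a minimal domain class (a hypothetical example) can be added; with dbCreate set to "update", Grails should create the corresponding table in the SQLite file on startup:

 class Registry {  
   String name  
 }  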



References:

http://stackoverflow.com/questions/19691940/grails-and-sqlite

http://stackoverflow.com/questions/32339950/how-to-add-a-non-mavenized-jar-dependency-to-a-grails-project-grails-3-x


Sunday, September 18, 2016

mongoimport and updating data types

Let's say I import a CSV file using the following command

mongoimport -d importtest --collection documents --type csv --file documents.csv  --headerline

mongoimport is a pretty handy tool for importing your existing records stored in CSV or JSON format; however, it does have its limitations. The datatypes inferred by mongoimport are not always correct, and fields often end up stored as Strings.

For example, if I have a date in the following format:

2012-07-14T01:00:00+01:00

mongoimport will store '2012-07-14T01:00:00+01:00' as a literal string.

To fix this, we use the mongo client command as follows:

db.documents.find({}).forEach( function (d) { d.dateCreated = new ISODate(d.dateCreated); db.documents.save(d); });

Here I convert the string to an ISODate.

This same technique can be applied to other datatypes such as Integer, Boolean, etc., as shown below.
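For instance, a field holding numbers imported as strings ('pageCount' is a hypothetical field name) can be converted like this:

db.documents.find({}).forEach( function (d) { d.pageCount = NumberInt(d.pageCount); db.documents.save(d); });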

Tuesday, September 6, 2016

GA4GH Reference Server - v.0.3.3 installation on CentOS 7

Here I will present the steps taken to install the GA4GH Reference Server, version 0.3.3, onto CentOS 7. For the official installation guide, please refer to the ga4gh installation guide.

This installation uses the Apache web server as the front end to service requests, with ga4gh running behind Apache.

Install essential library packages

$ yum install python-devel python-virtualenv zlib-devel libxslt-devel openssl-devel libffi-devel redhat-rpm-config ncurses-devel ncurses samtools

Install Apache web server

yum install httpd
yum install mod_wsgi
mkdir /var/cache/httpd/python-egg-cache
chown apache:apache /var/cache/httpd/python-egg-cache

Create a file /etc/httpd/conf.d/ga4gh.conf with the following contents:

 <VirtualHost *:80>  
 WSGIDaemonProcess ga4gh processes=10 threads=1 python-path=/srv/ga4gh/ga4gh-server-env/lib/python2.7/site-packages python-eggs=/var/cache/httpd/python-egg-cache  
 WSGIScriptAlias /ga4gh /srv/ga4gh/application.wsgi  
 <Directory /srv/ga4gh>  
   WSGIProcessGroup ga4gh  
   WSGIApplicationGroup %{GLOBAL}  
   Require all granted  
 </Directory>  
 </VirtualHost>  

Install ga4gh reference server


 mkdir -p /srv/ga4gh  
 cd /srv/ga4gh  
 virtualenv ga4gh-server-env  
 source ga4gh-server-env/bin/activate  
 pip install ga4gh  

Install some missing python packages that weren't automatically installed:
pip install flask
pip install flask-cors
pip install humanize
pip install oic
pip install protobuf
pip install pysam

When I tried to use some of the ga4gh command line scripts, I ran into some errors like the following:
[root@gatkslave ga4gh]# source ga4gh-server-env/bin/activate
(ga4gh-server-env)[root@gatkslave ga4gh]# ga4gh_repo init registry.db
Traceback (most recent call last):
  File "/srv/ga4gh/ga4gh-server-env/bin/ga4gh_repo", line 5, in
    from pkg_resources import load_entry_point
  File "/srv/ga4gh/ga4gh-server-env/lib/python2.7/site-packages/pkg_resources.py", line 3007, in
    working_set.require(__requires__)
  File "/srv/ga4gh/ga4gh-server-env/lib/python2.7/site-packages/pkg_resources.py", line 728, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/srv/ga4gh/ga4gh-server-env/lib/python2.7/site-packages/pkg_resources.py", line 626, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: sphinx-argparse==0.1.15


It appears that several python packages were still missing. I used the following commands to install them all, one by one:
pip install sphinx-argparse==0.1.15
pip install lxml==3.4.4
pip install pyOpenSSL==0.15.1
pip install oic==0.7.6
pip install requests==2.7.0
pip install pysam==0.9.0
pip install protobuf==3.0.0.b3
pip install Flask==0.10.1
pip install Flask-Cors==2.0.1
pip install pyjwkest==1.0.1
pip install Jinja2==2.7.3
pip install pycparser==2.14
pip install cffi==1.5.2
yum install libffi-devel
pip install ipaddress==1.0.16
pip install enum34==1.1.2
pip install pyasn1==0.1.9
pip install idna==2.1
pip install cryptography==1.3.1
pip uninstall pycryptodomex
pip uninstall pycryptodome

Disable SELinux:
# disable selinux
setenforce 0

Make the /srv tree traversable (sets the execute/search bit recursively):
chmod -R +x /srv

Create the WSGI file at /srv/ga4gh/application.wsgi with the following contents:
from ga4gh.frontend import app as application
import ga4gh.frontend as frontend
frontend.configure("/srv/ga4gh/config.py")

Create the configuration file at /srv/ga4gh/config.py with the following contents:
DATA_SOURCE = "/srv/ga4gh/ga4gh-example-data/repo.db"

Install bgzip

cd /usr/src
bzip2 -d htslib-1.3.1.tar.bz2
tar xvf htslib-1.3.1.tar
cd htslib-1.3.1
make
make prefix=/usr install


Data import

Reference set

wget ftp://ftp.1000genomes.ebi.ac.uk//vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz
gunzip hs37d5.fa.gz
bgzip hs37d5.fa
ga4gh_repo add-referenceset registry.db /srv/ga4gh/hs37d5.fa.gz  -d "NCBI assembly of the human genome" --ncbiTaxonId 9606 --name NCBI37

Ontology

wget https://raw.githubusercontent.com/The-Sequence-Ontology/SO-Ontologies/master/so-xp.obo

ga4gh_repo add-ontology registry.db /srv/ga4gh/so-xp.obo -n so-xp

Create a new dataset

ga4gh_repo add-dataset registry.db NA12878_sample1_rerun_sg1_snvcalls --description "1000genomes genome, an Illumina platinum genome and one that the Kinghorn guys use for testing their sequencer"

 

Importing VCFs

If you have a bunch of VCFs in a directory, you can loop through and bgzip each of them:
for i in *.vcf; do bgzip $i; done

Run tabix on each of the bgzipped files:
for i in *.gz; do tabix $i; done

ga4gh_repo add-variantset registry.db NA12878_sample1_rerun_sg1_snvcalls /srv/ga4gh/datasets/NA12878_sample1_rerun_sg1_snvcalls --name NA12878_sample1_rerun_sg1 --referenceSetName NCBI37

Test query

curl -X POST -H 'Content-Type:application/json' -d '{"variantSetId": "WyJOQTEyODc4X3NhbXBsZTFfcmVydW5fc2cxX3NudmNhbGxzIiwidnMiLCJOQTEyODc4X3NhbXBsZTFfcmVydW5fc2cxIl0", "referenceName":"22","start":17190024,"end":"17671934"}' http://your.server/ga4gh/variants/search

Thursday, June 9, 2016

Meteor Template undefined

I was trying to invoke a function from within a template. For example, in my HTML file I had the following:




 <template name="docTypesTemplate">  
               {{#each types}}  
                 do something ...  
               {{/each}}    
 </template>  


And in my JS file I had:


 Template.body.helpers({    
   types() {    
     return DocTypes.find({});  
   },   
 });  

Sounds pretty straightforward, right? You'd expect to be able to use the types() helper inside your template? WRONG! Instead you will find that your variables are 'undefined'.

It turns out the types() helper is available within the body of the HTML ONLY.
So if you want to use it from within a named template, you will need to register it again as follows:


 Template.registerHelper("types", function() {      
   return DocTypes.find({});    
 });  

I hope this helps somebody else out there, as this took me a while to figure out.

Cheers!

Thursday, May 19, 2016

Meteor TypeError: undefined is not a function

In Meteor, I was adding a new mongo collection using the following code:

 export const Areas = new Mongo.collection('areas');  

The app restarted and I got the following error:

 TypeError: undefined is not a function  
   at meteorInstall.imports.api.documents.js (imports/api/documents.js:8:22)  
   at fileEvaluate (packages/modules-runtime/.npm/package/node_modules/install/install.js:141:1)  
   at require (packages/modules-runtime/.npm/package/node_modules/install/install.js:75:1)  
   at meteorInstall.server.main.js (server/main.js:2:1)  
   at fileEvaluate (packages/modules-runtime/.npm/package/node_modules/install/install.js:141:1)  
   at require (packages/modules-runtime/.npm/package/node_modules/install/install.js:75:1)  
   at C:\Users\Philip\doc-revision\.meteor\local\build\programs\server\app\app.js:240:1  
   at C:\Users\Philip\doc-revision\.meteor\local\build\programs\server\boot.js:283:10  
   at Array.forEach (native)  
   at Function._.each._.forEach (C:\Users\Philip\AppData\Local\.meteor\packages\meteor-tool\1.3.2_4\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)  

It turns out that the word 'collection' should have been 'Collection'. This one was difficult to spot, but I eventually found it. The following code fixes the error:


 export const Areas = new Mongo.Collection('areas');  

Wednesday, May 18, 2016

Grails 2.x + Webflow + Services defined in domain classes = Fail

Environment:

  • Grails 2.5.4
  • Webflow plugin 2.1.0

I know it's been documented that you can define services inside domain classes for a Grails project. And for the most part this works, until you want to use them in a webflow.

We started noticing some very strange behaviour where the domain classes stored in flow scope started missing property values. For no apparent reason, as the user progressed through the webflow, the properties of these domain classes would reset to null!

The fix was to remove any service class definitions inside domain classes.


Tuesday, May 3, 2016

Meteor datepicker formatting

Meteor version: 1.3.2.4
Package: rajit:bootstrap3-datepicker (https://atmospherejs.com/rajit/bootstrap3-datepicker)




 Template.body.rendered=function() {  
   $('.datepicker').datepicker({  
     format: "dd/mm/yyyy"  
   });  
 }  
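For the picker to attach, the template needs a matching element with the datepicker class (a hypothetical example):

 <input type="text" class="datepicker" placeholder="dd/mm/yyyy">  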

Monday, April 11, 2016

ReferenceError: check is not defined

Meteor version: v1.3

I was following the tutorial on the Meteor website found here:

https://www.meteor.com/tutorials/blaze/security-with-methods

In the tutorial they recommend checking user input using a command as follows:

check(text, String);
However, when I went to run the application, I got the following error message:

Exception while invoking method 'tasks.remove' ReferenceError: check is not defined
Nowhere in the tutorial did it mention that you had to add the 'check' package. Running the following command solved the problem:

meteor add check

Thursday, April 7, 2016

Meteor - TypeError: Cannot call method 'config' of undefined

Following the instructions on the meteor.js tutorial site:

https://www.meteor.com/tutorials/blaze/adding-user-accounts

I ran into the following error:

TypeError: Cannot call method 'config' of undefined
There was an import statement that I had to add to a main.js file:

import '../imports/startup/accounts-config.js';

It turns out I had added the import line to the wrong 'main.js' file.
There are two main.js files: one in the client folder and the other in the server folder.


Once I moved the import statement to the client version, the app started working again.


'npm' is not a Meteor command

I had previously installed Meteor months ago with the intention of giving it a go, but never got around to it. Fast forward to today: I tried to reinstall Meteor using the latest installer for version 1.3, then proceeded to follow the tutorials, where one step required me to enter the command:

meteor npm install

I received the following error:

'npm' is not a Meteor command. See 'meteor --help'.

It turns out that my version of Meteor was not actually updated by the installer. To check my Meteor version, I used the following command:

meteor --version

Which showed me "Meteor 1.2.0.2".

Solution

To upgrade meteor, run the following command:

meteor update
Reference: https://forums.meteor.com/t/installing-meteor-1-3-with-an-existing-1-2-left-in-place/20670

Thursday, March 17, 2016

Grails MappingException: Missing type or column for column

I've been trying to upgrade an older project from Grails 2.3.9 to Grails 2.4.4.
I've done this sort of upgrade with many of my other projects without issues, but this time around I got stumped on the following error:

 MappingException: Missing type or column for column
The error message was not particularly helpful, since I knew the domain class mapping was correct in a previous version of Grails, and the way associations between domain classes are mapped hasn't changed. Finding the underlying cause wasn't easy, so I wanted to share it with you in case you run into a similar problem.

In my domain classes there were many getter methods that had no corresponding field defined. For example, I had a getter called getFoo() but no corresponding field in the domain class called 'foo'. When using such getter methods, Grails normally requires defining a static list of transients as follows:


static transients = ['foo']

Once these transients were defined, the error messages stopped.
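Putting it together, a minimal sketch (the class and getter names are illustrative):

 class MyDomainClass {  
   String name  
  
   static transients = ['foo']  
  
   // derived value with no backing column; without the transients entry  
   // above, Hibernate tries to map 'foo' to a column and fails  
   String getFoo() {  
     return name?.toUpperCase()  
   }  
 }  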


Wednesday, March 9, 2016

Grails decoding default html encoding

I was trying to embed a URL with ampersands '&' in my HTML page using the following expression:

GSP file
${url}


and since my default codec in Config.groovy was set to 'html':

Config.groovy
grails.views.default.codec = "html"

The ampersand was always being converted to its HTML entity:

&amp;

What I wanted was just the '&' character, unencoded.

To do that we can use the raw() function as follows:

GSP file
${raw(url)}

References:
http://stackoverflow.com/questions/1337464/overriding-grails-views-default-codec-html-config-back-to-none

Wednesday, March 2, 2016

ElasticSearch - Failed bulk item: RemoteTransportException

I was having issues running a bulk import of millions of records into Elasticsearch, which resulted in the error:

Failed bulk item: RemoteTransportException

I tried several changes to the elasticsearch.yml configuration but continued to get the above error. To review the status of each node, I executed the following command:

curl -XGET http://ip_address:9200/_nodes/stats?pretty

To identify which node was causing the rejections, I looked at the thread_pool.bulk stats for each node and found that one of the nodes had a high rejection count:


      "thread_pool" : {
        "bulk" : {
          "threads" : 100,
          "queue" : 0,
          "active" : 0,
          "rejected" : 1482,
          "largest" : 100,
          "completed" : 121512
        },


I then examined the memory profile under the jvm.mem stats:


        "mem" : {
          "heap_used_in_bytes" : 3336515992,
          "heap_used_percent" : 62,
          "heap_committed_in_bytes" : 5333843968,
          "heap_max_in_bytes" : 5333843968,
          "non_heap_used_in_bytes" : 55600912,
          "non_heap_committed_in_bytes" : 84336640,
          "pools" : {
            "young" : {
              "used_in_bytes" : 121153000,
              "max_in_bytes" : 279183360,
              "peak_used_in_bytes" : 279183360,
              "peak_max_in_bytes" : 279183360
            },
            "survivor" : {
              "used_in_bytes" : 21682640,
              "max_in_bytes" : 34865152,
              "peak_used_in_bytes" : 34865152,
              "peak_max_in_bytes" : 34865152
            },
            "old" : {
              "used_in_bytes" : 3193680352,
              "max_in_bytes" : 5019795456,
              "peak_used_in_bytes" : 3865208384,
              "peak_max_in_bytes" : 5019795456
            }
          }
        },

I could see that the node was running out of memory, since heap usage (62%) exceeded the 50% of memory I had reserved for indexing.

/etc/elasticsearch/elasticsearch.yml configuration


indices.memory.index_buffer_size: 50%
 

After finding the node causing the problem, I looked at the log files under the /var/log/elasticsearch folder, where I could see the exception:

Caused by: java.lang.OutOfMemoryError: Java heap space

[2016-02-29 02:35:39,599][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-02-29 02:35:39,600][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited


Double-check that mlockall is true:
[root@webserver logs]# curl http://x.x.x.x:9200/_nodes/process?pretty
{
  "cluster_name" : "cpi",
  "nodes" : {
    "BPKQWgTFTtmT4uh2w7WRLw" : {
      "name" : "cpi1",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 28226,
        "mlockall" : true
      }
    },
    "0LVlWGZvSRu3E8lC6bVzfw" : {
      "name" : "cpi2",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 17308,
        "mlockall" : true
      }
    },
    "07kUt8PPShqMQ8-W_cigWA" : {
      "name" : "cpi3",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 20625,
        "mlockall" : true
      }
    }
  }
}


I modified the /etc/security/limits.conf and added the following lines:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
root - memlock unlimited
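These limits allow the JVM to lock its memory; the locking itself is requested by the following elasticsearch.yml setting (the nodes above already report mlockall as true):

bootstrap.mlockall: true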


Also, the number of concurrent threads importing into Elasticsearch was reduced from 8 to 5.



Thursday, February 18, 2016

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

Upon upgrading Elasticsearch to 2.1.0, I tried to do an export by pulling in a lot of records in one go and got the following error:

Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter

I suppose the proper fix would be to modify the code to use the scroll API, but for a quick fix, you can use the following:

 curl -XPUT http://1.2.3.4:9200/index/_settings -d '{ "index" : { "max_result_window" : 1000000}}'
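For completeness, a minimal sketch of the scroll approach (the index name and batch size are illustrative):

 # open a scroll context; the response contains the first batch plus a _scroll_id  
 curl -XGET 'http://1.2.3.4:9200/index/_search?scroll=1m' -d '{ "size": 1000, "query": { "match_all": {} } }'  
  
 # fetch subsequent batches by passing back the _scroll_id from each response  
 curl -XGET 'http://1.2.3.4:9200/_search/scroll' -d '{ "scroll": "1m", "scroll_id": "<scroll_id from previous response>" }'  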

Reference:

https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html

Sunday, February 14, 2016

elasticsearch 2.1.0 and "Unable to get effective rights from ACL: Overlapped I/O operation is in progress" error

When starting Elasticsearch 2.1.0 on Windows 7, I got the following error:


Likely root cause: java.io.IOException: Unable to get effective rights from ACL: Overlapped I/O operation is in progress.

        at sun.nio.fs.WindowsFileSystemProvider.getEffectiveAccess(WindowsFileSystemProvider.java:344)
        at sun.nio.fs.WindowsFileSystemProvider.checkAccess(WindowsFileSystemProvider.java:397)
        at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:246)
        at org.elasticsearch.bootstrap.Security.addPath(Security.java:227)
        at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:191)
        at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:184)
        at org.elasticsearch.bootstrap.Security.configure(Security.java:105)
        at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

When I checked the Java version being used, I found it was using jdk1.7.0_21:

echo %JAVA_HOME%

Upgrading to jdk1.8.0_65 resolved the issue

Monday, January 18, 2016

VCFtools and Tabix installation on Linux (RHEL)

VCFtools installation


Download vcftools from here: https://github.com/vcftools/vcftools/zipball/master
Unzip the downloaded package and install the RHEL packages required to build vcftools:

 


yum install autoconf
yum install automake
yum install gcc
yum group install "Development Tools"
yum install zlib
yum install zlib-devel

export PERL5LIB=/path/to/your/vcftools-directory/src/perl/
cd vcftools/
./autogen.sh
./configure
make
make install



The binary executables will be installed in the /usr/local/bin folder.
To add the executables to your path use the following:
export PATH=$PATH:/usr/local/bin


To set it permanently add the following line to the /etc/profile file just before it gets exported:
PATH=$PATH:/usr/local/bin:/usr/local/tabix

Tabix installation

(1) Go here to download the newest release.

(2) Extract the file:

tar xvjf tabix-0.2.6.tar.bz2

(3) Compile the program by typing make on the UNIX command line.
(4) Export the path by adding the following line to your .bashrc file, saving the file, and typing source ~/.bashrc on the UNIX command line. Note: path_to_tabix is the directory where tabix is installed.


export PATH=$PATH:/path_to_tabix/tabix-0.2.6
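With both tools on the PATH, a typical workflow (example.vcf is a hypothetical file) is to compress a VCF with bgzip and then index it with tabix:

 bgzip example.vcf            # produces example.vcf.gz  
 tabix -p vcf example.vcf.gz  # produces the example.vcf.gz.tbi index  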

 

References:


https://vcftools.github.io/examples.html
http://genometoolbox.blogspot.com.au/2013/11/installing-tabix-on-unix.html