Monday, December 10, 2018

Grails 2.4.4 Spring tool suite STS - failed to read artifact descriptor

Recently, we started having issues downloading dependencies for our Grails projects running on Java 7 with the following error:

 failed to read artifact descriptor 
I was trying to run a 'grails clean' command on the project through STS Eclipse (version 3.6.4). I was aware that support for TLSv1 was finally being disabled across the community and suspected the failure might be related. I made several attempts to specify TLSv1.1 and TLSv1.2 in various STS configurations, without success. First I tried adding the following argument to the STS eclipse INI file:

-Dhttps.protocols=TLSv1.1,TLSv1.2
I also tried setting the same https.protocols property in the JRE definition in Eclipse:

Windows -> Preferences -> Java -> Installed JREs -> jdk1.7.0_79 -> Edit -> Default VM arguments

No luck either.

I finally decided to run the grails command on the command line, where JDK 8 was the default:


C:\grails\grails-2.4.4\bin\grails clean
| JVM Version: 1.8.0_171
| Application cleaned.

You can see that under Java 8, Grails was able to download all the dependencies successfully.

It's not ideal to have to use the command line to download the dependencies, but once everything is cached locally, you can switch back to STS to build and run Grails apps.
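If you do need the command line, a minimal Windows session that forces Grails onto JDK 8 might look like this (the install paths are assumptions; adjust them to your machine):

```shell
REM Point this shell at a 64-bit JDK 8 install (path is an assumption)
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_171
set PATH=%JAVA_HOME%\bin;%PATH%

REM grails picks up JAVA_HOME, so the clean now runs under Java 8
C:\grails\grails-2.4.4\bin\grails clean
```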

Sunday, November 25, 2018

Ignoring OSSEC rules

To ignore some errors in OSSEC we can configure our custom rules in /var/ossec/rules/local_rules.xml

In this case I'm going to ignore some Shibboleth errors I received in an email:

 OSSEC HIDS Notification.  
 2018 Nov 26 12:56:27  
 Received From: apn-lsrv01->/etc/httpd/logs/ssl_access_log  
 Rule: 31122 fired (level 5) -> "Web server 500 error code (Internal Error)."  
 Src IP: 150.203.1.1  
 Portion of the log(s):  
 150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937  
  --END OF NOTIFICATION  

The parts we'll need are the rule that fired (31122) and the URL in the logged request ('/Shibboleth.sso/NIM/Artifact').

Before we add new rules to ignore this error, we need to identify which group it belongs to.


 cd /var/ossec/rules  
 grep -lir 31122 .  
 ./web_rules.xml  

Here we can see that rule 31122 lives in the file web_rules.xml, so the group it belongs to is 'web'.

Now let's analyze how OSSEC decodes the log line, using a tool called ossec-logtest.
Start it by running /var/ossec/bin/ossec-logtest, then copy and paste the portion of the log you received in the email. You should get a response similar to this:


 [root@apn-lsrv01 bin]# ./ossec-logtest  
 2018/11/26 13:08:06 ossec-testrule: INFO: Reading local decoder file.  
 2018/11/26 13:08:06 ossec-testrule: INFO: Started (pid: 8696).  
 ossec-testrule: Type one log per line.  
 150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937  
 **Phase 1: Completed pre-decoding.  
     full event: '150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937'  
     hostname: 'apn-lsrv01'  
     program_name: '(null)'  
     log: '150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937'  
 **Phase 2: Completed decoding.  
     decoder: 'web-accesslog'  
     srcip: '150.203.25.3'  
     srcuser: '-'  
     action: 'GET'  
     url: '/Shibboleth.sso/NIM/Artifact'  
     id: '500'  
 **Phase 3: Completed filtering (rules).  
     Rule id: '31122'  
     Level: '5'  
     Description: 'Web server 500 error code (Internal Error).'  
 **Alert to be generated.  

Here we can see that OSSEC decoded the log with the url field set to '/Shibboleth.sso/NIM/Artifact'.

This means that when we write our rule to ignore this error, we can match on that URL.

Now we can proceed to create our rule by editing the /var/ossec/rules/local_rules.xml by adding the following to the end of the file:


 <group name="web," >  
  <rule id="100032" level="0">  
   <if_sid>31122</if_sid>  
   <url>/Shibboleth.sso</url>  
   <description>Ignore Shibboleth</description>  
  </rule>  
 </group>  


  • In this rule we specified that the rule belongs to the group called 'web'.
  • The <if_sid> element means the rule only applies when rule 31122 has fired.
  • And the URL must match /Shibboleth.sso.

We can rerun ossec-logtest without having to restart OSSEC. Doing so, we should now see the following:

 [root@apn-lsrv01 bin]# ./ossec-logtest  
 2018/11/26 13:11:17 ossec-testrule: INFO: Reading local decoder file.  
 2018/11/26 13:11:17 ossec-testrule: INFO: Started (pid: 9181).  
 ossec-testrule: Type one log per line.  
 150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937  
 **Phase 1: Completed pre-decoding.  
     full event: '150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937'  
     hostname: 'apn-lsrv01'  
     program_name: '(null)'  
     log: '150.203.25.3 - - [26/Nov/2018:12:56:25 +1100] "GET /Shibboleth.sso/NIM/Artifact HTTP/1.1" 500 937'  
 **Phase 2: Completed decoding.  
     decoder: 'web-accesslog'  
     srcip: '150.203.25.3'  
     srcuser: '-'  
     action: 'GET'  
     url: '/Shibboleth.sso/NIM/Artifact'  
     id: '500'  
 **Phase 3: Completed filtering (rules).  
     Rule id: '100032'  
     Level: '0'  
     Description: 'Ignore Shibboleth'  

After all that testing, we are now ready to release our changes by restarting OSSEC.
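To apply the change, restart OSSEC via its control script (the path assumes a default installation):

```shell
# restart all OSSEC daemons so the new rule in local_rules.xml takes effect
/var/ossec/bin/ossec-control restart
```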

Wednesday, October 31, 2018

Exetel vs Myrepublic NBN speed test Canberra

Time 5:30pm

Exetel 

Download: 45.5 Mbps
Upload: 15 Mbps

MyRepublic

Download: 40 Mbps
Upload: 8 Mbps

Exetel is faster, cheaper, has better phone plans, and provides a static IP for free. Exetel wins hands down.

Tuesday, August 28, 2018

OSSEC postfix email using localhost doesn't work

OSSEC had issues sending me emails with the following error message in the /var/ossec/logs/ossec.log

ERROR: Error Sending email to localhost (smtp server)

OSSEC was configured to use Postfix as the SMTP host in /var/ossec/etc/ossec-server.conf:

  <global>  
   <email_notification>yes</email_notification>  
   <email_to>philip.wu@anu.edu.au</email_to>  
   <smtp_server>localhost</smtp_server>  
   <email_from>patient-lookup@130.56.244.180</email_from>  
  </global>  


Once I changed localhost to 127.0.0.1, postfix emails worked:

  <global>  
   <email_notification>yes</email_notification>  
   <email_to>philip.wu@anu.edu.au</email_to>  
   <smtp_server>127.0.0.1</smtp_server>  
   <email_from>patient-lookup@130.56.244.180</email_from>  
  </global>  
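
As a quick sanity check (a hypothetical session, assuming nc is installed), you can confirm that an SMTP daemon actually answers on the loopback address. One plausible explanation for the original failure is that 'localhost' resolved to something Postfix wasn't listening on, such as the IPv6 address ::1:

```shell
# expect a "220 ... ESMTP Postfix" banner if Postfix is listening on 127.0.0.1:25
echo QUIT | nc 127.0.0.1 25
```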

Reference:

https://github.com/ossec/ossec-hids/issues/1122 

Tuesday, August 21, 2018

Encrypting postgres backups

Lately I've been dabbling in the world of security. While I'd be more interested in doing other things, like building features and tackling research problems, security should be part of everyday thinking when designing solutions. One area of security focuses on databases.

I've made the effort to doubly encrypt the postgres data at rest: once at the table column level, where certain fields are encrypted, and again at the file system level, with postgres living on a separately encrypted attached volume. But these efforts would be wasted if the database backups were stored as plain text. True, the encrypted fields would remain encrypted, but for peace of mind, let's encrypt the backups themselves!

Here I'll be using GPG (GNU Privacy Guard) encryption on a CentOS 7 machine with a postgres database. While there is a lot of information about GPG on the web, I couldn't find a comprehensive article on how to do this. So here we go!

First, let's install GPG:


yum install gnupg2


Since I'm using the postgres user to perform the automated backups with ident authentication, we need to switch to the postgres user (assuming we are already the root user):


# become the postgres user
su postgres


When generating GPG keys, gpg asks for a passphrase via the TTY. Unfortunately, this doesn't work well inside an 'su session' like the one we just started. To work around this, we issue the following command:

# workaround to generate gpg key in a su session as postgres
script /dev/null

Writing the typescript output to /dev/null starts a new shell on a pseudo-terminal owned by the postgres user (and discards the session recording), so gpg can prompt for the passphrase without hitting the permission problem.

Now we can generate the GPG keys for the postgres user. You will be asked for a passphrase - keep it somewhere safe.

bash-4.2$ gpg2 --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: postgres
Email address:
Comment:
You selected this USER-ID:
    "postgres"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

The important thing to take note of is the 'Real name', which I've specified as 'postgres'. We will use this 'Real name' later when we perform the encryption.

At this stage it seemed to just hang, with no indication that it was doing anything at all. In my first attempt I let it sit for over an hour and still nothing. It turns out that gathering entropy takes a long time when there's no system activity, so let's introduce some 'random activity' in another terminal:


yum install rng-tools
rngd -r /dev/urandom



After running the rngd command, you'll notice almost immediately, in the other terminal, that the GPG key generation has completed. Now you can kill the rngd process that's still running in the background.


ps -aux | grep rngd
root     25652  0.0  0.0  13216   368 ?        Ss   14:37   0:00 rngd -r /dev/urandom
root     25665  0.0  0.0 112704   976 pts/0    S+   14:37   0:00 grep --color=auto rngd
kill -9 25652


To troubleshoot, you can monitor the available entropy, which should sit at around 1450 when the system is idle and drop much lower while entropy is being consumed:

watch cat /proc/sys/kernel/random/entropy_avail

Now that we have our GPG keys, we are ready to encrypt files. Here I've created a script to execute the postgres backups, compression and encryption all in one step:

pg_dump -U postgres db_name | gzip > /backups/db_backup_$(date +%Y-%m-%d).psql.gz
gpg -e -r postgres /backups/db_backup_$(date +%Y-%m-%d).psql.gz
rm -f /backups/db_backup_$(date +%Y-%m-%d).psql.gz
chmod 0600 /backups/*.gpg

The first line uses pg_dump to generate a compressed GZ backup file.
The second line takes the GZ file and encrypts it, creating a new GPG file. The -e argument tells GPG to encrypt, and the -r argument specifies the recipient, which in this case is the 'postgres' user ID we created earlier when generating the GPG keys.
Since GPG writes a new file, the third line removes the plain GZ file.
The fourth line then restricts the encrypted backups to read/write by the owner only.

You can run the script on a cron job to routinely do your backups.
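As a sketch, assuming the lines above live in a script at /var/lib/pgsql/backup_encrypt.sh, the postgres user's crontab entry might look like:

```shell
# run the encrypted backup nightly at 02:30, logging output (paths are assumptions)
30 2 * * * /var/lib/pgsql/backup_encrypt.sh >> /var/lib/pgsql/backup.log 2>&1
```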

Of course, before you put this into production, you should check that you can successfully decrypt the backups.

su postgres
script /dev/null
gpg postgres_backup.gpg
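
A fuller restore test, with the backup filename and database name as illustrative assumptions, could look like:

```shell
# decrypt (gpg will prompt for the passphrase), then decompress and restore
gpg -o /tmp/db_backup.psql.gz -d /backups/db_backup_2018-08-21.psql.gz.gpg
gunzip /tmp/db_backup.psql.gz
psql -U postgres db_name < /tmp/db_backup.psql
```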



Sunday, July 22, 2018

SELinux and postgres troubles

OS version: CentOS 7

Upon enabling SELinux, I noticed that the postgres service wouldn't start. When I checked the logs, I noticed the following error message:


 [root@webserver data]# systemctl status postgresql.service  
 ● postgresql.service - PostgreSQL database server  
   Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)  
   Active: failed (Result: exit-code) since Sun 2018-07-22 23:18:52 UTC; 8s ago  
  Process: 2903 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=1/FAILURE)  
  Process: 2897 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)  
 Jul 22 23:18:51 webserver.novalocal systemd[1]: Starting PostgreSQL database server...  
 Jul 22 23:18:51 webserver.novalocal pg_ctl[2903]: postgres cannot access the server configuration file "/var/lib/pgsql/data/postgresql.conf": Permission denied  
 Jul 22 23:18:52 webserver.novalocal pg_ctl[2903]: pg_ctl: could not start server  
 Jul 22 23:18:52 webserver.novalocal systemd[1]: postgresql.service: control process exited, code=exited status=1  
 Jul 22 23:18:52 webserver.novalocal systemd[1]: Failed to start PostgreSQL database server.  
 Jul 22 23:18:52 webserver.novalocal systemd[1]: Unit postgresql.service entered failed state.  
 Jul 22 23:18:52 webserver.novalocal systemd[1]: postgresql.service failed.  

To view the SELinux security context:
 [root@webserver var]# ls -Z /var/lib/pgsql/data/  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 base  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 global  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_clog  
 -rw-------. postgres postgres system_u:object_r:unlabeled_t:s0 pg_hba.conf  
 -rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_ident.conf  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_log_t:s0 pg_log  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_multixact  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_notify  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_serial  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_snapshots  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_stat_tmp  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_subtrans  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_tblspc  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_twophase  
 -rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 PG_VERSION  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_xlog  
 -rw-------. postgres postgres system_u:object_r:default_t:s0  postgresql.conf  
 -rw-------. postgres postgres system_u:object_r:postgresql_db_t:s0 postmaster.opts  

We can see that the postgresql.conf file was incorrectly assigned a type of default_t.

I noticed that several other files in the postgres data folder had a similar problem. To fix the type for all files under the data folder, run the following command:


 chcon -R system_u:object_r:postgresql_db_t:s0 /var/lib/pgsql/data/  

Rechecking the SElinux contexts:

 [root@webserver var]# ls -Z /var/lib/pgsql/data/  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 base  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 global  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_clog  
 -rw-------. postgres postgres system_u:object_r:unlabeled_t:s0 pg_hba.conf  
 -rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_ident.conf  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_log_t:s0 pg_log  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_multixact  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_notify  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_serial  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_snapshots  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_stat_tmp  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_subtrans  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_tblspc  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_twophase  
 -rw-------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 PG_VERSION  
 drwx------. postgres postgres unconfined_u:object_r:postgresql_db_t:s0 pg_xlog  
 -rw-------. postgres postgres system_u:object_r:postgresql_db_t:s0 postgresql.conf  
 -rw-------. postgres postgres system_u:object_r:postgresql_db_t:s0 postmaster.opts  

Now that the labels are fixed, start postgresql:


 service postgresql start  
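
One caveat: labels set with chcon can be reverted by a filesystem relabel. restorecon, which resets files to the contexts the loaded policy expects, is generally the more durable fix:

```shell
# reset everything under the data directory to the policy's default labels
restorecon -Rv /var/lib/pgsql/data
```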

References:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-selinux_contexts_labeling_files

Thursday, May 24, 2018

Gradle OutOfMemoryError



OutOfMemoryError: it's possible that Gradle is running in 32-bit mode when it should be running in 64-bit mode. To check which mode Gradle is running in, dump a few system properties in the build.gradle file as follows:
println System.properties['os.arch']
println System.properties['sun.arch.data.model']

If sun.arch.data.model has a value of 32, then it’s running in 32-bit mode.

Double-check that the JAVA_HOME environment variable is set to a path similar to
C:\Program Files\Java\jdk1.8.0_171
If the path points under C:\Program Files (x86), the JVM is likely running in 32-bit mode; in that case, install a 64-bit JDK. Another symptom of running in 32-bit mode is that trying to increase the memory allocation beyond about 1 GB produces the following error:

C:\Users\Philip\git\lims>gradle clean
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
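
Once a 64-bit JDK is in place, the heap allocation itself is normally raised in gradle.properties; a minimal sketch (the 2 GB figure is an assumption - tune it to your build):

```properties
# gradle.properties -- give the Gradle JVM a 2 GB heap
org.gradle.jvmargs=-Xmx2g
```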