Code as documentation

You often read about the value of code comments and good, meaningful variable/method/object names. Yet sometimes you’re coding under pressure to release a patch, or you think to yourself, who wouldn’t understand this code? It’s so beautifully laid out and formatted.

Today I fell victim to my own beautifully laid out and formatted code. Yikes!

...
 and left(accno, 1) in ( '2', '3', '4', '5', '9' ) 
...

I’m sure at the time I was very familiar with why accounts starting in that range were important, but a few months on I cannot remember.
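All that fragment needed was a comment. Something like the sketch below would have been enough (the real business reason is lost to me now, so the comment text here is purely hypothetical):

...
 -- only these account prefixes are in scope for this report
 -- (hypothetical reason; record the real one!)
 and left(accno, 1) in ( '2', '3', '4', '5', '9' )
...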

From this day forth I promise to do Code as Documentation, mum!

PostgreSQL: Installing on Windows using an alternate service username

I came across this scenario the other day. I had to install PostgreSQL 9.0 on a Windows box that already had an old PostgreSQL install, and hence a local postgres account already set up. Using the same postgres account was not an option because a) I didn’t know the password, and b) if the password ever changed, nobody would update the PostgreSQL 9.0 service entry with the new password.

Fortunately there is an easy way to do this. Launch the one-click installer from the command line with the following arguments:

postgresql-9.0.01-1-windows.exe --serviceaccount username

This will allow you to complete the installation as usual, but the service account username will be the value you passed. You can confirm this on the screen that asks you for the service account password. You can also complete the whole installation unattended; just launch the one-click installer with the --help argument for a full list of options.
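For example, a fully unattended install could look something like the sketch below. Treat the flags other than --serviceaccount as assumptions; they are options I would expect this generation of one-click installer to accept, but check your installer’s --help output before scripting against them:

postgresql-9.0.01-1-windows.exe --mode unattended ^
  --serviceaccount myserviceuser ^
  --servicepassword "service-account-password" ^
  --superpassword "postgres-superuser-password" ^
  --prefix "C:\PostgreSQL\9.0" ^
  --datadir "C:\PostgreSQL\9.0\data"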

PostgreSQL: The joy of generate_series

Using generate_series to build up the data required for reporting from a temporally modelled table.

create table temporal_table (
some_value integer,
date_from timestamp default current_date,                     -- when the row becomes effective
date_to timestamp default '3000/12/31 23:59:59'::timestamp    -- far-future default marks the currently open row
);

insert into temporal_table values (1, '2010/01/01', '2010/01/05 23:59:59');
insert into temporal_table values (2, '2010/01/06', '2010/01/08 23:59:59');
insert into temporal_table values (3, '2010/01/09', '2010/01/31 23:59:59');
insert into temporal_table values (4, '2010/02/01');

select *
from temporal_table
join (
  select generate_series as calendar_date
  from generate_series('2010/01/01 00:00'::timestamp, '2010/01/31 23:59:59', '1 day')
) as period
on period.calendar_date between temporal_table.date_from and temporal_table.date_to
where date_from between '2010/01/01' and '2010/01/31 23:59:59';
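Given the sample rows above, the join should yield one row per calendar day in January, each paired with the temporal row whose range covers it: days 1 to 5 against some_value 1, days 6 to 8 against 2, and days 9 to 31 against 3. If you just want to see what generate_series itself emits, a minimal sketch is:

select generate_series('2010/01/01'::timestamp, '2010/01/07', '1 day') as calendar_date;
-- returns seven rows, 2010-01-01 00:00:00 through 2010-01-07 00:00:00, one per day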

Determining size of PostgreSQL database

Always handy from PostgreSQL Online Journal

How to determine the size of a database on disk

SELECT pg_size_pretty(pg_database_size('somedatabase')) As fulldbsize;

How to determine the size of a database table on disk

NOTE: There are two functions in PostgreSQL: pg_relation_size and pg_total_relation_size. pg_relation_size measures just the size of the table itself, whereas pg_total_relation_size includes the table plus all of its TOAST tables and indexes.

SELECT pg_size_pretty(pg_total_relation_size('someschema.sometable')) As fulltblsize,
  pg_size_pretty(pg_relation_size('someschema.sometable')) As justthetblsize;

How to determine the size of a database schema

When you are doing a lot of data loading for later massaging before dumping into your production tables, you will often generate junk tables in the process, and worse yet, other people may generate junk tables, period. As a general practice, we like to create a schema just for junk. The reason is that it is extremely easy to exclude schemas from being backed up and to load individual schemas. So with this approach we create or move any tables we are not quite sure we need, or only need temporarily, into a schema called junk, scratch, possiblejunk or whatever you want to call it, and exclude that schema from our backups.

Something of the form:

CREATE SCHEMA scratch;
ALTER TABLE someschema.joeyplaying SET SCHEMA scratch;
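Excluding that schema when backing up is then just a pg_dump flag away. A minimal sketch, reusing the scratch schema above and the mycurrentdb database from the query below:

pg_dump --exclude-schema=scratch mycurrentdb > mycurrentdb_no_junk.sql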

After saving the world from junk, you would like to know how much space you have saved your backups from backing up. So you would employ a query something of the form:


SELECT pg_size_pretty(pg_database_size('mycurrentdb')) As fullprod,
  pg_size_pretty(CAST(pg_database_size('mycurrentdb')
    - (SELECT SUM(pg_total_relation_size(table_schema || '.' || table_name))
       FROM information_schema.tables
       WHERE table_schema = 'scratch') As bigint)) As tobebackedup_size,
  pg_size_pretty(CAST((SELECT SUM(pg_total_relation_size(table_schema || '.' || table_name))
    FROM information_schema.tables
    WHERE table_schema = 'scratch') As bigint)) As junk_size;