Postgres… Postgre… PostgreSQL… what's in a name? A decision to use Postgres over SQL Server

What’s all this noise?


Yeah, that's right, I said it: we are using Postgres, not SQL Server, for our .Net project. Crazy talk, blasphemy, get a rope!

Now that I have your attention, I am going to attempt to lay out the decision-making process that led to Postgres, a few little gotchas for the average .Net developer, some (hopefully interesting) Postgres history, some definitions and stuff you may want to know, and lastly some things that are still lingering in my mind about this decision.

Now that the ado is finished, let's get into it.

History is fun!

I am terrible at naming things, but apparently they are worse

The most important thing you need to deal with when choosing Postgres is what it's called and how you pronounce it. Well, that is a tasty burger, and it depends on who you ask, but apparently Tom Lane, one of the lead developers, just calls it Postgres. Pronounced just like it's spelled. No silent letters. So call it what you want in the privacy of your own home, but in this document, or if you speak to me about it in hallways, we will call it Postgres.

From the horse’s mouth… neighhhhhh

Things or stuff I find interesting (You can skip this if you’re lame)

So now that we know what to call it, let's dive a bit into its origin story. Postgres is an object-relational database management system, or ORDBMS for short. But really, who cares? What does that mean to me?

Let's digress for a minute. An ORDBMS is like any other relational database except that it layers an object-oriented database model on top, so you have things like classes and inheritance in both the database schema and the query language. Wow, so are you telling me one table can inherit from another like my object-oriented classes? Why yes it can, but is that the question you should be asking? More on this later. This gives it the ability to walk the fine line between pure relational databases, which we all know and love, and the object databases that object-oriented developers are gravitating towards. When you use an ORDBMS the data resides in a relational data model and is manipulated using a query language. This means you get the richness of mathematical relations and can exploit useful techniques from set theory, as with any RDBMS, but you also get features that allow you to better match your object-oriented code structure using containers, lists, user-defined types, and nested objects, just like you would with an object database. Digression complete.

Postgres started out, as so many open source projects do, at a university: in 1985 at the University of California, Berkeley, when Michael Stonebraker decided to create a post-Ingres database to fix problems with the then-current generation of databases, including Ingres itself, which he had also led at Berkeley.

Postgres was born.

Version 1 of Postgres was released in 1989, and by 1993 there were so many users that they overwhelmed the small Berkeley team with support and feature requests. In 1994 the university project ended, and the code was released to the public under an MIT license for further development. Later that same year Andrew Yu and Jolly Chen replaced the PostQUEL query language with SQL and called the result Postgres95. In 1996 Marc Fournier provided the first non-university development server, the work of stabilizing the Berkeley code began, and the project was renamed PostgreSQL. A dumb name was born.

Since 1997, a group of developers and volunteers from around the world has maintained and enhanced Postgres as the PostgreSQL Global Development Group.


But I like SQL Server… Why did you choose Postgres?

Me too. That doesn’t mean I can’t like something else. Besides learning new things can be fun.

As a Microsoft developer I have become very comfortable with SQL Server, because the Microsoft tooling makes it such an easy option and it's a great database. But so is Postgres. The decision comes down to a lot of factors. Here are some of the questions I asked myself, and the answers, when making the decision to use Postgres over SQL Server for C3. Spoiler alert: the decision was hard to make, as both are great platforms.

How would the team members, with various skill levels, take to learning a new database platform versus using something they already know?

I'll be really honest: I was super worried about this issue. Our team consists of a bunch of devs writing mainly C# and JavaScript, with no one, including myself, whom I would consider a database expert. If I made a decision to pull SQL Server out from under them, would I have an all-out revolt, or a whole bunch of developers struggling with a new technology?

Turns out most of the things, if not all, that you can do in SQL Server you can do in Postgres with fairly minor tweaks to syntax, and if you get a good management and development tool even those are smoothed over. What I found, for me at least, is that moving to Postgres felt very natural. If I wanted stored procedures, they were there for me in the form of functions, with some very cool options like marking a function immutable, stable, or volatile. If I wanted views, they were there for me. I decided to use PL/pgSQL for the query language, and for the most part it's extremely similar to T-SQL. Data definition tasks were easy to implement, and once again a good management and development tool makes this a non-issue. Setting up the security model to control access to objects was very natural once I figured out that both users and groups are defined as roles, and that I needed to set my functions up as SECURITY INVOKER.
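
As a sketch of what that looks like, here is a minimal PL/pgSQL function; the table and function names are invented for illustration:

```sql
-- A read-only "stored procedure": STABLE tells the planner the function
-- does not modify data, and SECURITY INVOKER runs it with the caller's
-- privileges rather than the owner's.
CREATE OR REPLACE FUNCTION get_customer_name(p_id integer)
RETURNS varchar AS $$
BEGIN
    RETURN (SELECT name FROM customers WHERE id = p_id);
END;
$$ LANGUAGE plpgsql STABLE SECURITY INVOKER;

-- Called like any other function
SELECT get_customer_name(1);
```

A function doing INSERT/UPDATE/DELETE would be marked VOLATILE instead.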

The single biggest issue I struggled with was the built-in data types. Even this wasn't all bad, but I did have the Postgres documentation open while defining my table structures so I could read through the data types and figure out the correct one, which slowed me down. The other thing, both good and bad, is the sheer number of built-in functions. There are thousands of them. You can look through them all, and the documentation is good, but familiarizing yourself with them will take time.
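
For example, a table that would use nvarchar, datetime, bit, and uniqueidentifier in SQL Server might look like this in Postgres; the table and column names are made up:

```sql
CREATE TABLE orders (
    id         serial PRIMARY KEY,  -- int IDENTITY(1,1)
    label      varchar(100),        -- nvarchar(100): Postgres varchar is Unicode already
    created_on timestamp,           -- datetime
    is_active  boolean,             -- bit, when used as a flag
    tracking   uuid                 -- uniqueidentifier
);
```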

All in all this was a breeze, especially when using a good management and development tool to ease the learning curve (seems like I said this a few times), and it would not hinder us from moving to Postgres.

Are we able to tolerate some learning curve where we get things wrong and need to fix it later?

All the above being said, I am a realist: I don't know what I don't know, and there is a real possibility I could screw something up in a way that causes a serious issue for our customers. So if that happens, can we tolerate it? I believe the answer is yes. I am sure if something goes really bad there will be a fair amount of armchair quarterbacking about why we would pick something with unknowns when SQL Server is great. The good news is I believe in the team's ability to correct and overcome issues quickly, and if you know me you know I have a pretty strong personality when it comes to technology, so I can create cover for the team while we fix it.

It does mean we will probably be a little more conservative on estimates and our velocity of work may slow a bit as we proceed cautiously while we figure stuff out. The least expensive bug is the one you fix during development.

Since we are using AWS what gaps, if any, would we need to fill to use one database engine over another?

This was a challenging question for me, because I really like SQL Server and the Enterprise edition is the bee's knees; however, AWS with its Aurora offering is just too compelling. Aurora fills most of the management gaps (HA, DR, security, updates, and scale), and it does so really well. The RDS version of SQL Server will do these things, but not as well, and unfortunately at a higher sticker price (more on TCO later). So I am not sure there were any immediate gaps we would need to fill from a management perspective with either solution. It's just that Aurora does it cheaper and better than RDS for SQL Server.

What is the development experience like from .Net and Visual Studio? Would we need to drastically change our workflow?

Our team is used to Visual Studio for C#, VS Code for JavaScript and web UI, and SQL Server Management Studio (SSMS) for database work in our development flow, so I wanted to replicate that as best as possible. We also like to use Dapper, and we may use some ADO.Net here and there when speed is super important. So I went to NuGet and pulled in Npgsql as our ADO.Net Postgres provider, swapped in its versions of SqlConnection and SqlCommand, and easy breezy cover girl: it just worked. From a C# perspective it is a super small change. The connection string looks a bit different than the SqlClient provider's for SQL Server, but not drastically. I was able to use both Dapper and ADO.Net with only minor changes to our code base.
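
For reference, a minimal Npgsql connection string; the host and credentials below are placeholders, not our real values:

```text
Host=my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com;Port=5432;Database=mydb;Username=app_user;Password=not_a_real_password
```

Compared to SqlClient's Server/Initial Catalog/User ID keys, the key names differ, but the shape is the same.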

Now I wanted to manage my Postgres database and do some development against it.  I was able to find a plethora of good tools for Postgres, my favorite being Navicat. With Navicat I was able to visually design my cardinality map, create functions, create views, manage my re-indexing strategy and other maintenance tasks along with scheduling those tasks to execute, use a query profiler, view my query plans and make adjustments.  I could also do things like import and export data to and from Postgres, with some limitations (sad). It has intellisense and syntax help, which I really needed.

Essentially my workflow stayed the same. I did find myself missing a few options that I like in SSMS, but there are other tools that fill those gaps, or we could create our own management scripts to handle it for now (yuck).

Database Projects

Some teams may prefer Visual Studio database projects to SSMS, and there is a solution for Postgres if you want that functionality: https://www.devart.com/dotconnect/postgresql/

Would there be any noticeable performance issues with Postgres?

This was simple. NO.

Would there be any support issues with an open source database versus a licensed one?

To be really honest, I am still not sure about this one. We are using Aurora Postgres, so would Amazon help us with Postgres-specific issues? I would hope so. Because we are on Aurora we are using an older version of Postgres, and we do not control the actual engine, so we couldn't deploy Postgres-specific hotfixes anyway. There are companies that provide support plans, since the Postgres project itself does not provide any, but those are expensive and I don't believe they would apply to Aurora. Besides, I can count on one finger how many times I have needed to open a support ticket with Microsoft on SQL Server for something that wasn't my own fault. I will say the community support for Postgres is very good, and most answers can be found on the Postgres mailing lists or Stack Overflow.

For now, I am going to stick my head in the sand on this one and move forward. If it becomes an issue and Amazon can’t help us out I am sure we will figure something out.

What’s the tooling story for Postgres?

In a single word: extensive. Tooling support is not a problem, but picking the one for your team could take forever. I looked at a whole bunch of them for management and development and narrowed it down to three: Navicat, EMS, and SQL Maestro. They all cost money, but I think they pay for themselves in time saved, especially when climbing a learning curve. These tools really helped me get up and running quickly with Postgres, especially when designing a new database. I really like to visualize the cardinality between my data objects, and the tools I mentioned have great diagramming features.

There is another tool that I need to look at and have not yet, and that is a diagnostic tool for Postgres. I want a nice query analyzer for running queries, with the ability to kill a runaway query. I really like Idera Diagnostic Manager for SQL Server because it gives you features like a top-10 worst queries list, a running query analyzer, and immediate remediation capabilities like killing a process. It also has a very data-centric monitoring view. I want a similar tool for Postgres. I was able to find one with a very friendly price tag called Select Star, but I have not at this time been able to put it through its paces. The good news is that these tools exist.
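
Until we settle on a tool, Postgres itself exposes some of this through its built-in statistics views; a minimal sketch (the pid below is a made-up example):

```sql
-- See currently running queries and how long they have been going
SELECT pid, usename, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active';

-- Politely cancel a runaway query, or forcibly kill its session,
-- using a pid from the query above
SELECT pg_cancel_backend(12345);
SELECT pg_terminate_backend(12345);
```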

For completeness, we are using DataDog on our AWS production account, and it does a lot for us from a pure monitoring and alerting aspect, but it does not cover what a good diagnostic manager can do.

Is Postgres a real big time database engine or am I going to be missing a lot of cool features that I’ve come to rely on with Sql Server Enterprise?

Simple: yes, it's prime time, and no, I am not missing much. It does the things I need, like online index builds, replication, etc…

Am I going to make our current Sql Server dba’s my enemies?

Again, full disclosure: I was really worried about this one. We have a team of very knowledgeable SQL Server DBAs, and I really like having that resource in house. The thought of being stuck with no one to ask questions about our database was not very appealing, especially if the relationship was going to be contentious. Well, good news: I spoke briefly to our main guy and he seemed really OK with it, and while he said he had no experience with Postgres, I got the impression he was willing to help out if he could. So I think we are OK on this one.

Am I willing to stake the life of this project on this database engine and stand by it?

Simple and Yes!

How much is this going to cost me over its lifetime?

This is interesting. With both systems you will have hard costs like instance price, HA, DR, licensing, tooling, DBA salaries, and so on. These hard costs are pretty easy to see, and Postgres comes out ahead hands down on the AWS platform. The real question to me was the soft costs implied by Postgres. Things like:

  • Learning Curve
  • Support
  • Time to find Issues and Fix them
  • Politics

The answer I came up with is that I don't readily know, at least not in hard metrics. I mean, every question I asked above was me trying to figure this out and put together a TCO comparison of SQL Server and Postgres in real numbers. I failed at that, or ran out of time, one or the other.

However, I do have an answer as long as you take my instinct into consideration, and here it is: "I don't think Postgres will cost us any more than SQL Server, and it has the potential to cost us less over the long term."

Volkswagen Disclaimer to TCO

Your mileage may vary.

Lions and Tigers and Bears oh my. Postgres for the Microsoft Developer


Here is a rundown of things I know are different. Are there more? Yes. But these are some of them:

  • There is no IDENTITY; if you want an auto-incrementing key you use serial or bigserial
    • CREATE TABLE myTable(p1 serial);
  • TRUNCATE does not restart a serial's sequence by default; you will need to specify it like this
    • Truncate with restart
      TRUNCATE TABLE tablename RESTART IDENTITY;
  • There are no stored procedures; you use functions instead
    • Mark the function VOLATILE if it does DML, STABLE for typical selects, or IMMUTABLE if its result depends only on its arguments
    • Set the function security to SECURITY INVOKER so it uses the roles of the caller, and set privileges for least privilege
  • Unquoted identifiers are folded to lower case; quoted identifiers are case sensitive
  • Qualify objects with double quotes and use the full path: "schema"."table"
  • Remove permissions from the Public Role to your secure objects
  • Use CREATE OR REPLACE FUNCTION for functions
  • To get an estimated query plan use EXPLAIN; to get an actual query plan use EXPLAIN ANALYZE
  • STRICT in a function will return null when any of its arguments are null
  • CALLED ON NULL INPUT in a function means the function body needs to handle nulls itself
  • Statements need to be terminated with a semicolon, no exceptions
  • There is no nvarchar, only varchar, because Postgres handles Unicode by default; a length limit is optional (varchar with no limit is allowed)
  • Use limit instead of top and it comes at the end of a statement
    • Use offset to skip
    • Limit/Offset
      select * from "schema"."table" limit 10 offset 5
  • Use timestamp instead of datetime
  • You can create an enum object
    • Enum
      CREATE TYPE myEnum AS ENUM('Good', 'Bad', 'Ugly');
  • Use boolean for true/false/unknown and it returns the letters t/f or null
  • bit is used for bit masks, and the length defaults to 1, so set it explicitly, like
    • Bit Data Type
      CREATE TABLE myTable (p1 BIT(3));
  • Use :: to cast
    • CAST
      SELECT '1' :: INTEGER
  • There is an array data type; declare an element type followed by []
    • Array
      CREATE TABLE myTable (id serial PRIMARY KEY, phones text[]);
  • There is an hstore data type for storing key/value pairs (run CREATE EXTENSION hstore; first to enable it)
    • hstore
      CREATE TABLE myTable (id serial PRIMARY KEY, facets hstore);
      INSERT INTO myTable(facets) VALUES ('"myKey" => "Value"');
      SELECT * FROM myTable WHERE facets -> 'myKey' = 'Value';
  • There is a JSONB data type you can use for storing and querying JSON docs
    • JSONB is slower to save, but much faster to query, as opposed to the JSON data type
    • JSONB is indexable
    • jsonb
      -> returns values as jsonb
      ->> returns values as text
      CREATE TABLE myTable (id serial primary key, data jsonb);
      INSERT INTO myTable (data) VALUES ('{ "key":"value" }');
      SELECT * FROM myTable WHERE data ->> 'key' = 'value';
  • Use tsvector for full text indexing
    • use the built-in function to_tsquery to build the search condition
    • use the built-in trigger function tsvector_update_trigger to keep the search column populated
    • use the @@ match operator against to_tsquery in the where clause

    • Full Text Index
      CREATE TABLE myTable (id serial primary key, name varchar(64), email varchar(256), searchField tsvector not null);
      CREATE TRIGGER updateSearchField
      BEFORE INSERT OR UPDATE ON myTable
      FOR EACH ROW EXECUTE PROCEDURE
      tsvector_update_trigger(searchField, 'pg_catalog.english', name, email);
      SELECT * FROM myTable WHERE searchField @@ to_tsquery('someValue');
  • Use the uuid type for unique identifiers
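
To expand on the EXPLAIN bullet above, a minimal sketch, reusing the hypothetical myTable with a jsonb data column:

```sql
-- Estimated plan only: the query is planned but not executed
EXPLAIN SELECT * FROM myTable WHERE id = 1;

-- Actual plan: the query really runs, so wrap DML in a transaction
-- and roll it back if you don't want the side effects
BEGIN;
EXPLAIN ANALYZE UPDATE myTable SET data = '{"key":"value"}' WHERE id = 1;
ROLLBACK;
```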

Aurora boreAWS, or a naturally occurring electrical phenomenon

Aurora helps us manage our database engine for things like HA, DR, scale, and security. All of these things you can do with SQL Server Enterprise on premise, but unfortunately the RDS support, at least at this time, is a bit behind for SQL Server. This makes using Aurora more compelling in AWS. You could go straight IaaS here, spin up EC2, and install and maintain SQL Server yourself, and that may be a viable option, especially if you have experienced DBAs who are going to own it. For us, though, our little team owns the db soup to nuts, and we didn't want to take that much on.

Let's begin with HA and DR. Aurora handles both by creating a cluster of servers containing the primary (read/write) instance and read replica (read-only) instances, with self-healing storage for your data across availability zones. Aurora continuously backs up your db to S3, and you get to set the retention period. If any of the instances fail, Aurora will spin up another one to replace it. If your read/write instance fails, one of the read instances is promoted to read/write master and another read instance is created to take its place. The failover from read/write master to one of the read instances is pretty quick, but spinning up the new replacements can take up to 20 minutes. You can create up to 15 read instances in your cluster, or you can connect to on-premise replicas if you want. You can set up your replicas with a priority tier; if the read/write instance fails, an instance in the priority tier you selected becomes the read/write instance, and if you don't choose anything Aurora will pick for you. If for some reason you have no read instances (not good), Aurora will create a new read/write instance on failure. Your data lives in a virtual cluster volume and is stored up to six times across multiple availability zones.

We touched on scalability a bit when talking about HA, but there is more. Aurora has tweaked the db engine so it's actually faster than a typical Postgres install; they claim 3x faster, and I'll take their word for it. To scale read capacity you create more read instances and spread out your read load. When you want to connect to a read instance you call the reader endpoint, which load-balances across the read replicas. You can have up to 15 read replicas across AWS and on premise. You can scale your instances up or down on the fly with no interruption, and Aurora's storage automatically scales up to 64TB based on your data without any service interruption.

Security is provided directly by Aurora in two ways. The first is IAM, which you use for management tasks on the cluster itself; you control access to Aurora management features using IAM policies. For instance, if we had a policy that limited management functions to a specifically named cluster, and that policy was attached to the user or role you log in to the console with, you would be limited to that specific cluster, or a specific feature on it. The second uses Virtual Private Cloud (VPC) and security groups to restrict access by device, IP range, and EC2 instance, allowing or denying the ability to open connections on the supplied endpoints.

Who’s the tool

Tooling for Postgres is extensive. The hard part is picking one. For me the tooling falls into two categories: the first is management and development, and the second is diagnostics and monitoring. Let's start with management and development.

Management and Development

pgAdmin 4

There is a free tool called pgAdmin 4 that is very popular, and this is where I started. pgAdmin can be deployed as either a desktop application or a website, which is nice because you can centralize the tool and control access using its own login and user management dialogs. pgAdmin is by far the lowest-level tool of the bunch, and depending on how you approach it you can either love or hate it. To be honest, when I first started using Postgres I wasn't a fan, because it expected me to know a whole lot that I just did not know. Also, when creating my database I like to use a visual designer so I can see the relationships and think about how to query for information; pgAdmin 4 doesn't have this at all. As I have learned things, and now that the basics of my db are created, I find myself annoyed by it less, and if you are pretty knowledgeable about Postgres, aren't creating a database from scratch, and prefer manual scripting, I think this tool is decent. In fact, I sometimes use it when I really want to get low-level, as I can see exactly how every single object in my database is scripted and change it. Kinda dangerous, as I have already completely hosed my database once, but handy.

Navicat

As I was stumbling around the web looking for answers on how to do stuff and watching Pluralsight videos, this tool kept coming up, so I downloaded the 14-day trial. I am glad I did. I actually prefer to hand-type most of the scripts I write, and this tool seemed like a nice balance of helping when I needed it and staying out of my way when I didn't. As a Postgres noob I liked the code snippets it offers to jump-start common tasks. I mean, I almost always have the Postgres documentation open, but reading can be tedious; let's be honest, the last thing I want to do is read the directions. There is a really nice designer for building the data model. It draws a nice diagram you save on your machine, and when I was figuring out my structures and cardinality I found it super handy. You can save the diagram to PDF, PNG, or SVG if you need to share it with other folks, and you can even create or alter your database from the model using the built-in schema synchronization tool. While we are on synchronization, it offers a pretty darn good schema and data synchronization tool to help you create diff files from one db version to the next. This could be nice for deployments and versioning as we move forward. There is also a task scheduler to automate jobs like re-indexing, backups, data copies, etc. It will do simple data import and export tasks, which is really nice when you have bought a dataset in CSV format and need to put it into a proper structure for use. Lastly, it has a nice profiling tool, which they call Monitor, that does things similar to running SQL Profiler against SQL Server, though probably not as well. It has a price tag of $300 per user, or you can buy a license for the entire site. All in all this tool really helped me out. It has a nice interface and helped in all the right places.

Sql Maestro All in One Bundle

This tool has a lot of nice features. The UI is a bit clunky compared with Navicat, and it is broken down into four different tools. I am OK with that, but I do wish you could launch the tools from a single application. It does all of the same things as Navicat, plus it adds features to control the actual db server itself. If you happen to be one of those who stores BLOBs in your db, it has a nice viewer for looking at them. It has a really, really nice database extractor and SQL generator tool. You can find and kill processes using its process browser. It has more ETL features than Navicat for moving data from just about any database. Here is the real kicker: I can't figure out how to synchronize the schema with this tool. Maybe it's possible, but I couldn't find it easily, and that is a real negative. It is pricier than Navicat at $450 per seat, or you can buy a site license for $2,000.

EMS Sql Management Studio for PostgreSQL

This is the most expensive tool on the list, but it has the most features and is the closest to SQL Server Management Studio. Visually it is very close to SQL Maestro, and to be honest, when I was evaluating it I sometimes got confused about which tool I was using. It has UI options around all of its functionality, as well as a companion console host. The console host was intriguing, especially if you prefer working from the command line or want to incorporate it into a build and deploy process. It does have some source control support, but only TFS and SVN, and we use GitHub (sad). It has the best ETL and task tooling of the lot, and it has a test data generator that I didn't use. Like I said before, the full studio version is pricey at $520 per seat, but you can buy the tool in smaller pieces to help ease the cost.

The Ugly

Right now I am looking at Navicat as the leader in the space because, at its price point, it gives us most of the things we need right now, and we can augment as we go with either SQL Maestro or EMS. My mind may change if we decide we need the console feature supplied by EMS; then I may shift to buying that tool in pieces, or just for our build and deployment needs.

Diagnostics and Monitoring

First off, I have spent almost no time looking at these tools. I know we are going to use DataDog for the bulk of our monitoring and alerting story across all of our technology resources, but I am trying to home in on a tool that is Postgres-specific.

I do know the kinds of things I would like this tool to be able to do.  Here is a list:

  • Actively monitor, baseline, and report running queries
  • Allow me the ability to kill runaway processes and queries
  • Identify current open sessions and who owns them with the ability to kill them
  • Generate top 10 worst query lists
  • Allow me the ability to view query plans of running queries
  • Database usage, index fragmentation analysis
  • Simple dashboard showing cluster and instance health based on the actual offending metrics, with user-defined thresholds
  • Some alerting

I have spent some time in a past life being an accidental DBA, not a job I want, and I used a tool from Idera called Diagnostic Manager that really helped a dummy like me. I am looking for that tool's equivalent. So far the search has not been good, but I was able to find a couple that look promising. Here is the list:

  • Select Star
  • Foglight

The Ugly

Right now I am reading about and liking Select Star, because it will monitor my DynamoDB instances as well as my Aurora Postgres instances across environments in a single pane. Right now I am playing a bit of window whack-a-mole, and not being able to see everything at once is frustrating. The price for the Enterprise version of Select Star is $70 per database server per month. It may get a bit pricey depending on how many instances we hook up, but right now I am only concentrating on our production databases for both DynamoDB and Aurora. Stay tuned for more developments on which tool we choose.

Definitions and Stuff


Some of these I knew, some of these I knew once and forgot, some I just looked up, but they are all extremely accurate.

  • Set Theory – a branch of mathematical logic that studies collections of objects and provides the basis for database concepts such as Union, Intersect, Set Difference, Symmetric Difference, Cartesian Product and Power Set
  • Union – in set theory, given sets A and B, the output is the set of members that are in A or B (or both) – A[1,2,3] B[2,3,4] output [1,2,3,4]
  • Intersect – in set theory, of sets A and B, all objects that are in both A and B – A[1,2,3] B[2,3,4] output [2,3]
  • Set Difference – in set theory, of sets A and B, all members of A that are not members of B – A[1,2,3] B[2,3,4] output [1]
  • Symmetric Difference – in set theory, of sets A and B, the set of all objects that are members of one of the sets but not both – A[1,2,3] B[2,3,4] output [1, 4]
  • Cartesian Product – in set theory, of sets A and B, the result is all possible ordered pairs – A[1,2] B[red, white] output [(1, red), (1, white), (2, red), (2, white)]
  • Power Set – in set theory of A all members who are possible subsets of A – A[1, 2]  output [ [], [1], [2], [1, 2] ]
  • MVCC – Multi-Version Concurrency Control is how Postgres maintains data consistency. It does this by giving each SQL statement a snapshot of the data as it was at a point in time, which prevents statements from viewing inconsistent data produced by concurrent transactions. This approach avoids traditional read locks, so reading never blocks writing and writing never blocks reading, increasing database performance.
  • Transaction Isolation – defines the degree to which a transaction is isolated from the effects of other concurrent transactions
  • Dirty Read – current transaction can read data written by a concurrent uncommitted transaction
  • NonRepeatable Read – current transaction re-reads data it has previously read and finds modifications performed by another committed transaction
  • Phantom Read – current transaction re-executes a query returning a set of rows that satisfies a condition and finds that the set of rows satisfying the condition has changed due to another committed transaction
  • Serialization Anomaly – the result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time
  • Read Uncommitted – Postgres does not support this because of its MVCC concurrency model; if chosen it will behave exactly as Read Committed
  • Read Committed – The default in Postgres and will not allow Dirty Reads, but will allow Nonrepeatable reads, Phantom Reads, and Serialization Anomalies
  • Repeatable Read – will not allow dirty reads, nonrepeatable reads, or phantom reads (Postgres is stricter than the SQL standard requires here), but will allow serialization anomalies
  • Serializable – The most restrictive isolation level. Will not allow Dirty Read, NonRepeatable Reads, Phantom Reads, or Serialization Anomalies
  • ORDBMS – Object Relational Database Management System. Why are you reading this? I described it at the beginning of the document.
  • Rules – allow alternative actions to be performed on DML statements via a command transformation system that runs before execution. They can be useful to create the illusion of DML on a view
  • DDL – Data Definition language CREATE, ALTER  statements
  • DML – Data Manipulation Language INSERT, UPDATE, DELETE
  • OLTP – Online transaction processing for typical data entry and retrieval
  • OLAP – Online analytical processing, used for answering multidimensional queries quickly; typically associated with business intelligence
  • Object Identifier – Postgres system tables use object identifiers, or OIDs, as primary keys. They are typically discouraged for user-defined tables due to the size limitations of the OID counter.
  • Function – a defined unit of work that may or may not produce an output and may take inputs
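
To make the isolation levels above concrete, a minimal sketch; the table name is made up:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Both reads below see the same snapshot, even if another session
-- commits changes to myTable between them
SELECT count(*) FROM myTable;
SELECT count(*) FROM myTable;
COMMIT;
```

Under the default Read Committed level, each SELECT would instead take a fresh snapshot, so the second count could differ.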

Things you may find troubling

This is a list of things that I am keeping in the back of my mind.

  • I don’t really understand how support will work if something is wrong with Postgres itself
  • We probably need some kind of diagnostic manager for Postgres like Idera or SQL Sentry
