PostgreSQL: Category Archive

Posts about the PostgreSQL database

Tuesday, March 26, 2024
  myWebLog v2.1

myWebLog v2.1 is now available. There are some great new features in this release.

  • Full podcast episode chapter support is now available. Chapters can be created using a graphical interface, and will be served for applicable episodes by adding ?chapters to the URL of the post for that episode. The documentation has a detailed description of this feature.
  • Redirect rules can now be specified within myWebLog. While it has always supported prior links for pages and posts, this allows arbitrary rules, such as pages that direct to other sites, maintaining category archive links, etc. Its documentation also explains all about the feature and its options.
  • Canonical domains can now be enforced within myWebLog. For example, adding the configuration for BitBadger.AspNetCore.CanonicalDomains can enforce the use of www. (or its absence).
  • Docker images can now be built within the source. The plan for 2.1 was to provide those images from the outset, but rather than relying on an external registry, we plan to stand up our own for distribution of public container images. If no code changes are required in myWebLog before that registry is available, we will release v2.1 images with this current build; if not, we will do a point release for them.
  • The version of htmx injected for the “auto htmx” functionality has been updated to v1.9.11.

In addition to these features, a decent amount of the development of this version included full integration tests with all three data storage backends. SQLite, PostgreSQL, and RethinkDB are all verified to give the same results for all data operations. (NOTE: SQLite users will need to back up using v2, then restore to an empty database using v2.1; this will update the data representation used in several tables.)

Finally, there are downloads for this release that target .NET 8, 7, and 6 - all the currently-supported versions of the .NET runtime. (The Docker images target .NET 8, which does not matter because, well, Docker.)

Head on over to the release page to get the binaries for your system! Feel free to participate in the project over on GitHub.

(Note: the link above now points to v2.1.1, which fixed an issue with PostgreSQL upgrades between v2 and v2.1. Upgraders from v2 can safely (and are encouraged to) go straight to 2.1.1.)


Friday, August 31, 2018
  A Tour of myPrayerJournal: The Data Store

NOTES:

  • This is post 6 in a series; see the introduction for all of them, and the requirements for which this software was built.
  • Links that start with the text “mpj:” are links to the 1.0.0 tag (1.0 release) of myPrayerJournal, unless otherwise noted.

Up to this point in our tour, we've talked about data a good bit, but it has all been in the context of whatever else we were discussing. Let's dig into the data structure a bit, to see how our information is persisted and retrieved.

Conceptual Design

The initial thought was to create a document store with one document type, the request. The request would have an ID, the ID of the user who created it, and an array of updates/records. Through the initial phases of development, our preferred document database (RethinkDB) was going through a tough period, with their company shutting down; thankfully, they're now part of the Linux Foundation, so they're still around. RethinkDB supports calculated fields in documents, so the plan was to have a few of those to keep us from having to retrieve or search through the array of updates.

We also considered a similar design using PostgreSQL's native JSON support. While it does not natively support calculated fields, a creative set of indexes could also suffice. As we thought it through a little more, though, this seemed to be over-engineering; this isn't unstructured data, and PostgreSQL handles max-length character fields very well. (This is supposed to be a “minimalist” application, right?) A relational structure would fit our needs quite nicely.

The starting design, then, used 2 tables. request had an ID and a user ID; history had the request ID, an “as of” date, a status (created, updated, etc.), and the optional text associated with that update. Early in development, the journal view brought together the request/user IDs along with the latest history entry that affected the text of the request, as well as the last date/time an action had occurred on the request. When the notes capability was added, it got its own note table; its structure was similar to the history table, but with non-optional text and without a status. As snoozing and recurrence capabilities were added, those fields were added to the request table (and the journal view).

The final design uses 3 tables, 2 of which have a one-to-many relationship with the third; and 1 view, which provides the calculated fields we had originally planned for RethinkDB to calculate.
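To make this concrete, here is a rough sketch of those types as F# records. The field names and types are illustrative assumptions, not copied from Data.fs, but the shape follows the description above - a request, its child history entries and notes, and a read-only journal view type. (The attributes on each type are explained later in this post.)

open System.Collections.Generic

/// A status update or text change for a request
[<CLIMutable; NoComparison; NoEquality>]
type History =
  { requestId : string
    asOf      : int64
    status    : string
    text      : string option }

/// A free-form note against a request (like history, but the text is required and there is no status)
[<CLIMutable; NoComparison; NoEquality>]
type Note =
  { requestId : string
    asOf      : int64
    notes     : string }

/// A prayer request, with its child history entries and notes
[<CLIMutable; NoComparison; NoEquality>]
type Request =
  { requestId    : string
    userId       : string
    enteredOn    : int64
    snoozedUntil : int64
    showAfter    : int64
    recurType    : string
    recurCount   : int16
    history      : ICollection<History>
    notes        : ICollection<Note> }

/// The "journal" view - a request plus the calculated fields derived from its history
[<CLIMutable; NoComparison; NoEquality>]
type JournalRequest =
  { requestId    : string
    userId       : string
    text         : string
    asOf         : int64
    lastStatus   : string
    snoozedUntil : int64
    showAfter    : int64 }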

Database Changes (Migrations)

As we used 3 different server environments over the course of this project, we ended up writing a DbContext class based on our existing structure. For the Node.js backend, we created a DDL file (mpj:ddl.js, v0.8.4+) that checked for the existence of each table and view, and also had the SQL to execute if the check failed. For the Go version (mpj:data.go, v0.9.6+), the EnsureDB function does a similar thing; looking at line 347, it checks for a specific column in the request table and runs the ALTER TABLE statement to add it if it isn't there.
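For the curious, that check-and-alter approach translates to F# along these lines; this is only a sketch using Npgsql, and the column name and default value below are illustrative rather than the project's actual ones.

open Npgsql

/// Add the (hypothetical) "showAfter" column to the request table if it is missing
let ensureShowAfterColumn (connStr : string) =
  use conn = new NpgsqlConnection (connStr)
  conn.Open ()
  use check =
    new NpgsqlCommand (
      "SELECT COUNT(*) FROM information_schema.columns
        WHERE table_name = 'request' AND column_name = 'showAfter'", conn)
  if (check.ExecuteScalar () :?> int64) < 1L then
    use alter =
      new NpgsqlCommand (
        """ALTER TABLE request ADD COLUMN "showAfter" bigint NOT NULL DEFAULT 0""", conn)
    alter.ExecuteNonQuery () |> ignore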

The only change that was required since the F#/Giraffe backend has been in place was the one to support request recurrence. Since we did not end up with a scaffolded EF Core initial migration/model, we simply wrote a SQL script to accomplish these changes (mpj:sql directory).1

The EF Core Model

EF Core uses the familiar DbContext class from prior versions of Entity Framework. myPrayerJournal does take advantage of a feature that just arrived in EF Core 2.1, though - the DbQuery type. DbSets are collections of entities that generally map to an underlying database table. They can be mapped to views, but unless it's an updateable view, updating those entities results in a runtime error; plus, since they can't be updated, there's no need for the change tracking mechanism to care about the entities returned. DbQuery addresses both these concerns, providing lightweight read-only access to data from views.

The DbContext class is defined in Data.fs (mpj:Data.fs), starting in line 189. It's relatively straightforward, though if you have only ever seen a C# model, it's a bit different. The combination of val mutable x : [type] and the [<DefaultValue>] attribute is the F# equivalent of C#'s [type] x; declaration, which creates a variable and initializes reference types to null. The EF Core runtime provides these instances via their setters (lines 203, 206, 209, and 212), and the application code uses them via the getters (a line earlier, each).
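Condensed into a sketch, the shape of the class looks something like this (reusing the record sketches from earlier; the real Data.fs also exposes notes, and its OnModelCreating calls a configureEF member on each type - sketched a bit further down as the standalone configureHistory and configureRequest functions).

open Microsoft.EntityFrameworkCore

type AppDbContext (options : DbContextOptions<AppDbContext>) =
  inherit DbContext (options)

  // backing fields; [<DefaultValue>] lets them start out null, as C# fields would
  [<DefaultValue>] val mutable private requests : DbSet<Request>
  [<DefaultValue>] val mutable private history  : DbSet<History>
  [<DefaultValue>] val mutable private journal  : DbQuery<JournalRequest>

  // EF Core populates these via the setters; application code reads the getters
  member this.Requests with get () = this.requests and set v = this.requests <- v
  member this.History  with get () = this.history  and set v = this.history  <- v
  member this.Journal  with get () = this.journal  and set v = this.journal  <- v

  override this.OnModelCreating (mb : ModelBuilder) =
    base.OnModelCreating mb
    // "configure where it's defined" - each entity type sets up its own mapping
    configureHistory mb
    configureRequest mb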

The OnModelCreating overridden method (line 214) is called when the runtime first creates its instance of the data model. Within this method, we call the .configureEF function of each of our database types. The name of this function isn't prescribed, and we could define the entire model without even referencing the data types of our entities; however, this technique gives us a “configure where it's defined” paradigm with each entity type. While the EF “Code First” model creates tables that don't need a lot of configuring, we must provide more information about the layout of the database tables since we're writing a DbContext to target an existing database.

Let's start out by taking a look at History.configureEF (line 50). Line 53 says that we're mapping to the table history. This seems to be a no-brainer, but EF Core would (by convention) be expecting a History table; since PostgreSQL uses a different syntax for case-sensitive names, these queries would look like SELECT ... FROM "History" ..., resulting in a nice “relation does not exist” error. Line 54 defines our compound key (requestId and asOf). Lines 55-57 define certain properties of the entity as required; if we try to store an entity where these fields are not set, the runtime will raise an exception before even trying to take it to the database. (F#'s non-nullability makes this a non-issue, but it still needs to be defined to match the database.) Line 58 may seem to do nothing, but what it does is make the text property immediately visible to the model builder; then, we can define an OptionConverter<string>2 for it, which will translate between null and string option (None = null, Some [x] = [x]). (Lines 60-61 are left over from when I was trying to figure out why line 62 was raising an exception, leading to the addition of line 58; they could safely be removed, and will be for a post-1.0 release.)
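Sketched as a standalone function (in the real code it is a static configureEF member on the History type), that configuration looks roughly like this, assuming the record fields from the earlier sketch; OptionConverter<'T> is the converter from the package mentioned in footnote 2.

open Microsoft.EntityFrameworkCore

let configureHistory (mb : ModelBuilder) =
  mb.Entity<History> (fun m ->
    m.ToTable "history" |> ignore              // lowercase, to match the PostgreSQL table
    m.HasKey ("requestId", "asOf") |> ignore   // compound key
    m.Property(fun e -> e.requestId).IsRequired () |> ignore
    m.Property(fun e -> e.status).IsRequired () |> ignore
    // let the nullable text column round-trip as string option
    m.Property(fun e -> e.text)
      .HasConversion (OptionConverter<string> ()) |> ignore)
  |> ignore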

History is the most complex configuration, but let's take a peek at Request.configureEF (line 126) to see one more interesting technique. Lines 107-110 define the history and notes collections on the Request type; lines 138-145 define the one-to-many relationship (without a foreign key entity in the child types). Note the casts to IEnumerable<x> (lines 138 and 142) and obj (lines 140 and 144); while F# is good about inferring types in a lot of cases, these calls are two places where it is not. We can use the :> operator for the cast because these types are part of the inheritance chain. (The :?> operator is used for potentially unsafe casts.)
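The relationship mapping can be sketched the same way; note the explicit casts, which mirror the ones described above (field and type names are still the assumed ones from earlier).

open System.Collections.Generic
open Microsoft.EntityFrameworkCore

let configureRequest (mb : ModelBuilder) =
  mb.Entity<Request> (fun m ->
    m.ToTable "request" |> ignore
    m.HasKey "requestId" |> ignore
    // one-to-many to history and notes, with no foreign key property on the child types
    m.HasMany(fun e -> e.history :> IEnumerable<History>)
      .WithOne()
      .HasForeignKey(fun h -> h.requestId :> obj) |> ignore
    m.HasMany(fun e -> e.notes :> IEnumerable<Note>)
      .WithOne()
      .HasForeignKey(fun n -> n.requestId :> obj) |> ignore)
  |> ignore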

Finally, the attributes above each record type need a bit of explanation; each one has [<CLIMutable; NoComparison; NoEquality>]. The CLIMutable attribute creates a no-argument constructor for the record type, which the runtime can use to create instances of the type. (The side effect is that we may get null instances of what is expected to be a non-null type, but we'll look at dealing with that a bit later.) The NoComparison and NoEquality attributes keep F# from creating field-level equality and comparison methods on the types. While these are normally helpful, they can raise a NullReferenceException when they encounter one of those null instances. As these record types are simply our data transfer objects (both from SQL and to JSON), we don't need that functionality anyway.

Reading and Writing Data

EF Core uses the “unit of work” pattern with its DbContext class. Each instance maintains knowledge of the entities it has loaded, and does change tracking against those entities, so it knows what commands to issue when .SaveChanges() (or .SaveChangesAsync()) is called. It doesn't do this for free, though, and while EF Core does it much more efficiently than Entity Framework proper, change tracking works by observing mutations to the tracked instances - and F# record types do not support mutation. If req is a Request instance, for example, { req with showAfter = 123456789L } returns a new Request instance, one the change tracker knows nothing about.

Lines 227-233 in Data.fs provide the solution to this problem. We can manually register an instance of an entity as either added or modified, and when we call .SaveChanges(), the runtime will generate the SQL to update the data store accordingly. This also allows us to use .AsNoTracking() in our queries (lines 250, 258, 265, and 275), which means that the resultant entities will not be registered with the change tracker, saving that overhead. Notice that we don't specify that on line 243; since Journal is defined as a DbQuery instead of a DbSet, we get change-tracking avoidance for free.
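Sketched as additional members on the context from the earlier sketch (the member names here are illustrative, not necessarily those used in Data.fs), the idea looks like this:

  // assumes open Microsoft.EntityFrameworkCore at the top of the file

  /// Register a brand-new record instance; SaveChanges will INSERT it
  member this.AddEntry<'TEntity when 'TEntity : not struct> (e : 'TEntity) =
    this.Entry<'TEntity>(e).State <- EntityState.Added

  /// Register a replacement record instance as changed; SaveChanges will UPDATE it
  member this.UpdateEntry<'TEntity when 'TEntity : not struct> (e : 'TEntity) =
    this.Entry<'TEntity>(e).State <- EntityState.Modified

  /// Retrieve a request without involving the change tracker
  member this.TryRequestById reqId uId =
    this.Requests.AsNoTracking()
        .FirstOrDefaultAsync (fun r -> r.requestId = reqId && r.userId = uId)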

Generally speaking, the preferred way to write queries against a DbContext instance is to define extension methods against it; these are compiled as statics, and they keep the context itself as lightweight as possible while still extending it where necessary. However, since this context is so small, we've simply created the 6 data-retrieval methods we need directly on the context.

If you've been reading along with the tour, we have already seen a few API handler functions (mpj:Handlers.fs) that use the data context. Line 137 has the handler for /api/journal, the endpoint to retrieve a user's active requests. It uses .JournalByUserId(), defined in Data.fs line 242, whose signature is string -> JournalRequest seq. (The latter is an F# alias for IEnumerable<JournalRequest>.) Back in the handler, we use db ctx to get the context (more on that below), then call the method; we're piping the output of userId ctx into it, so it gets its lone parameter from the pipe, then its output is piped to the asJson function we discussed as part of the API.
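A sketch of that flow (using the db and userId helpers named here, and Giraffe's built-in json where the project has its own asJson handler):

open Giraffe

let journal : HttpHandler =
  fun next ctx ->
    // the user's active requests, straight from the Journal view
    let reqs = (db ctx).JournalByUserId (userId ctx)
    json reqs next ctx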

Line 192, the handler for /api/request/[id]/history, demonstrates both inserting and updating data. We attempt to retrieve the request by its ID and the user ID; if that fails, we return a 404. If it succeeds, though, we add a history entry (lines 201-207), and optionally update the showAfter field of the request based on its recurrence. Finally, the call on line 212 commits the changes for this particular instance. Since the .SaveChanges[Async]() methods return the number of records affected, we cannot use the do! operator for this; F# makes you explicitly ignore values you aren't either returning or assigning to a name. Binding the result to _, though, demonstrates that we realize there is a value to be had - we just are not going to do anything with it.
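Condensed into a sketch - with assumed field values, the error handling reduced to the 404 case, and the toOption helper that is described in the next paragraph - that flow looks something like this. It assumes Giraffe's task computation expression support.

open System
open Giraffe

let addHistory requestId : HttpHandler =
  fun next ctx -> task {
    let data = db ctx
    let! req = data.TryRequestById requestId (userId ctx)
    match toOption req with
    | Some _ ->
        // register the new history entry...
        let entry =
          { History.requestId = requestId
            asOf   = DateTime.UtcNow.Ticks
            status = "Updated"
            text   = None }
        data.AddEntry entry
        // ...then commit; SaveChangesAsync returns a row count, which we explicitly discard
        let! _ = data.SaveChangesAsync ()
        return! Successful.NO_CONTENT next ctx
    | None -> return! RequestErrors.NOT_FOUND "Request not found" next ctx
  }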

We mentioned that CLIMutable record types could be null. Since record types cannot normally be null, we cannot write something like match [var] with null -> ...; it's a compile-time error. What we can do, though, is use the box operator. box “boxes” whatever value we have into an object container, where we can then check it against null. The function toOption in Data.fs on line 11 does this work for us; throughout the retrieval methods, we use it to return options for items that are either present or absent. This is why we could do the match statement in the /api/request/[id]/history handler against Some and None values.
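Its shape is essentially this:

/// Convert a possibly-null CLIMutable record instance into an option
let toOption<'T> (item : 'T) =
  match box item with
  | null -> None
  | _ -> Some item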

Getting a DbContext

Since Giraffe sits atop ASP.NET Core, we use the same dependency injection technique any ASP.NET Core application would: we use the .AddDbContext() extension method on the IServiceCollection interface, and register it when we set up the dependency injection container. In our case, that happens in Program.fs (mpj:Program.fs) line 50, where we also direct it to use a PostgreSQL connection defined by the connection string “mpj”. (This comes from the unified configuration built from appsettings.json and appsettings.[Environment].json.) If we look back at Handlers.fs, lines 45-47, we see the definition of the db ctx call we used earlier. We're using the Giraffe-provided GetService<'T>() extension method to return this instance.
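Sketched together, with AppDbContext standing in for the real context type, and services and config being the service collection and configuration in scope at startup:

// Program.fs - register the context against the "mpj" connection string
// (assumes open Microsoft.EntityFrameworkCore and Microsoft.Extensions.DependencyInjection)
services.AddDbContext<AppDbContext> (fun options ->
    options.UseNpgsql (config.GetConnectionString "mpj") |> ignore)
|> ignore

// Handlers.fs - the helper that retrieves the registered context for a request
// (GetService<'T>() is the Giraffe extension method on HttpContext)
let db (ctx : HttpContext) = ctx.GetService<AppDbContext> ()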

 

Our tour is nearing its end, but we still have a few stops to go. Next time, we'll look at how we generated documentation to tell people how to use this app.


1 Writing this post has shown me that I need to either create a SQL creation script for the repo, or create an EF Core initial migration/model, in case the database ever has to be recreated from scratch. It's good to write about things after you do them!

2 This is also a package I wrote; it's available on NuGet, and I also wrote a post about what it does.


Friday, August 24, 2018
  A Tour of myPrayerJournal: Introduction

Recently, we released version 1.0 of myPrayerJournal, a minimalistic prayer journaling application. This series aims to provide a tour of the code, with several stops along the way.

From a technical perspective, this application was going to be a learning experience. We knew we wanted to use a Single Page Application (SPA) framework with an API; we'd built APIs before, but had yet to build a SPA. For front-end frameworks, we started with Angular, went through Aurelia and Elm, then decided on Vue. For the back-end API, we started with Suave, then went live on Node.js with Koa; later, we moved it to Go, and after .NET Core 2.1 was released, landed on Giraffe. The “learning experience” part was a success; through all these attempts, we utilized 5 different languages and 3 different database access techniques.

To understand the requirements, a short explanation of the process will help. “Prayer journaling” is a discipline where a person will write down the things for which they are praying; this provides a defined list to help guide their prayer, and helps them not forget things. Then, as the situation changes, they can record updates, through to the resolution of the situation (also called the request being “answered”). This discipline not only helps to focus efforts, it also provides a record of requests and answers. Although people have successfully used a notebook, or something similar, for a long time, that approach does have some downsides:

  • For long term requests, you can run out of room for updates.
  • A physical journal can only be in one place at one time.
  • Answered requests coexist with unanswered requests, so you have to flip pages past them.
  • Books can end up under stacks of other things, falling victim to “out of sight, out of mind.”

Looking to address some of those, the initial requirements started as the first three bullets below. The remaining requirements emerged through using the application as it was being developed.

  • List unanswered requests, in a way that they can be marked as prayed or answered, and be updated
  • List answered requests, and allow full requests (and their history) to be viewed
  • Do the above in a way that will not be distracting
  • Allow notes to be recorded for a request; not every update on a situation requires a change in the verbiage of the request
  • Allow requests to be “snoozed” (removed from the journal, with a specified date when they will reappear), and list snoozed requests so that the snooze can be expired (returning the request to the journal immediately)
  • Allow requests to be prioritized (this became the request recurrence feature)

Armed with these requirements, we will pick up next time with a look at the Vue front end.


Saturday, October 22, 2011
  Database Abstraction v0.8

When we began developing C# web applications, we found ourselves needing to determine the best way to access the database. We evaluated several technologies…

  • NHibernate - May be very good, but it was overkill for what we were trying to do.
  • LINQ to SQL - This brings C#'s LINQ (Language-Integrated Query) to SQL databases. You create database-aware classes and use LINQ to select from collections, which LINQ to SQL converts to database access. This is a good abstraction, but it relies on SQL Server; as we typically deploy to PostgreSQL, this didn't work. (We also couldn't get DBLinq, a database-agnostic implementation, to work.)
  • ADO.NET - This is the tried-and-true database access methodology, released as part of the initial release of the .NET framework. The downside to this is that it encourages SQL in the code at the point of data retrieval; it does not provide a clean separation of data access from data processing.
  • EF Code First - This didn't exist yet at the time; it's also very SQL Server-centric. Not faulting Microsoft for that, especially since they now release a free version; but, as we deploy on Linux, until they release a Linux version, SQL Server is not an option.

With our PHP applications, we had written a database service that read queries from XML files; queries were then accessed by name, with parameters passed via arrays. The one thing ADO.NET had going for it was the fact that it is based on interfaces. This means that if we wrote something that exposed, manipulated, and depended on IDbConnection (instead of SqlConnection, the SQL Server implementation of that interface), we could support any database. SqlDataReader implements IDataReader as well. Our solution was becoming apparent.
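To illustrate the idea - sketched here in F# for brevity, though the library itself is C# - the function below depends only on the ADO.NET interfaces, so it works unchanged against any provider's implementation:

open System.Data

/// Read the first column of every row returned by a query (assumes an already-open connection)
let firstColumnValues (conn : IDbConnection) (sql : string) =
  use cmd = conn.CreateCommand ()
  cmd.CommandText <- sql
  use rdr = cmd.ExecuteReader ()
  [ while rdr.Read () do yield rdr.[0] ]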

Over time, we developed what is now the Database Abstraction project hosted on CodePlex (UPDATE: migrated project to GitHub). On Thursday, we published the first public release (although the DLLs are in the repository, and are usually current at every commit). If you are looking for a way to separate your data access from the rest of your code, or want a solution that's database-agnostic, check it out. It supports SQL Server, MySQL, PostgreSQL, SQLite, and ODBC connections *, using the data provider name to derive the proper connection implementation. There is also a Mock implementation to support unit tests; this mock can return predefined data, which makes it a useful way to test methods. Finally, there is a membership and role provider based on Database Abstraction; simply configure the connection string, create the database tables, and away you go! **

A pre-release version is already in production use in our PrayerTracker application, and others are being built around it. If this sounds like something that could help your project, certainly feel free to check it out!

* Oracle is omitted from this list, as their DLL had redistribution restrictions; this meant that the source code repository, upon check-out, would have build errors. There may be an Oracle implementation in the future (it would be trivial), but there is not one now.

** The membership and role providers are untested; they will be tested and tweaked by version 0.9.


Wednesday, October 29, 2008
  Oracle SQL Developer Debian Package

Oracle SQL Developer is a Java-based tool that provides a graphical interface to a database. While its main focus is Oracle (of course), it can be hooked up, via JDBC, to many other databases, such as MySQL, PostgreSQL, and SQL Server. It's similar to Toad, but is provided by Oracle at no cost.

Oracle provides SQL Developer as either an RPM or a generic binary install. I like the ability to manage packages, but I've never had much luck getting RPM to run on Ubuntu. So I downloaded the RPM file, converted it to a .deb package (Debian package format) using alien, and installed it. It worked like a charm!

I haven't tested it with gcj, but using Sun's Java 6 update 7 from the Ubuntu repositories, it ran just fine. (You have to install the package as root - but for the rest of these steps, use your normal user, not root, as this puts settings in a .sqldeveloper directory off your home directory.) After you install the package, do a directory listing on /usr/lib/jvm; you're looking for the Sun JDK. If it's installed, you'll have a symlink java-6-sun that points to java-6-sun-1.6.0.07. Once you've determined the location of the JDK, run “sqldeveloper” from the command line - the program will prompt you for the path to your JDK. Enter it (probably /usr/lib/jvm/java-6-sun) and you're good to go. The package also installs an icon in the “Programming” or “Development” group; once you've told it where the JDK is, you can use that to launch it.

Download SQL Developer 1.5.1 Debian Package


Friday, March 28, 2008
  A Handy PHP Backup Script

I found a script over on the Lunarpages Forums about using PHP to back up your site. I have taken it, modified it a little, beefed up the documentation a lot, and am now posting it here. You can copy and paste it from below to customize it for your own use.

<?php
/**
 * Generic Backup Script.
 *
 * To configure this script for your purposes, just edit the parameters below.
 * Once you have the parameters set properly, when the script executes, it will
 * create an archive file, gzip it, and e-mail it to the address specified.  It
 * can be executed through cron with the command
 *
 * php -q [name of script]
 *
 * You are free to use this, modify it, copy it, etc.  However, neither DJS
 * Consulting nor Daniel J. Summers assume any responsibility for good or bad
 * things that happen when modifications of this script are run.
 *
 * @author Daniel J. Summers <daniel@djs-consulting.com>
 */

// --- SCRIPT PARAMETERS ---

/*  -- File Name --
	This is the name of the file that you're backing up, and should contain no
	slashes.  For example, if you're backing up a database, this might look
	something like...
$sFilename = "backup-my_database_name-" . date("Y-m-d") . ".sql"; */
$sFilename = "backup-[whatever-it-is]-" . date("Y-m-d") . ".[extension]";

/*  -- E-mail Address --
	This is the e-mail address to which the message will be sent. */
$sEmailAddress = "[your e-mail address]";

/*  -- E-mail Subject --
	This is the subject that will be on the e-mail you receive. */
$sEmailSubject = "[something meaningful]";

/*  -- E-mail Message --
	This is the text of the message that will be sent. */
$sMessage = "Compressed database backup file $sFilename.gz attached.";

/*  -- Backup Command --
	This is the command that does the work.

  A note on the database commands - your setup likely requires a password
	for these commands, and they each allow you to pass a password on the
	command line.  However, this is very insecure, as anyone who runs "ps" can
	see your password!  For MySQL, you can create a ~/.my.cnf file - it is
	detailed at //dev.mysql.com/doc/refman/4.1/en/password-security.html .
	For PostgreSQL, the file is ~/.pgpass, and it is detailed at
	//www.postgresql.org/docs/8.0/interactive/libpq-pgpass.html .  Both of
	these files should be chmod-ded to 600, so that they can only be viewed by
	you, the creator.

  That being said, some common commands are...

  - Backing Up a MySQL Database
$sBackupCommand = "mysqldump -u [user_name] [db_name] > $sFilename";

  - Backing Up a PostgreSQL Database
$sBackupCommand = "pg_dump [db_name] -h localhost -U [user_name] -d -O > $sFilename";

  - Backing Up a set of files (tar and gzip)
$sBackupCommand = "tar cvf $sFilename [directory]

  Whatever command you use, this script appends .gz to the filename after the command is executed.  */
$sBackupCommand = "[a backup command]";

// --- END OF SCRIPT PARAMETERS ---
//
// Edit below at your own risk.  :)

// Do the backup.
$sResult = passthru($sBackupCommand . "; gzip $sFilename");
$sFilename .= ".gz";

// Create the message.
$sMessage = "Compressed database backup file $sFilename attached.";
$sMimeBoundary = "<<<:" . md5(time());
$sData = chunk_split(base64_encode(implode("", file($sFilename))));

$sHeaders = "From: $sEmailAddress\r\n"
		. "MIME-Version: 1.0\r\n"
		. "Content-type: multipart/mixed;\r\n"
		. " boundary=\"$sMimeBoundary\"\r\n";

$sContent = "This is a multi-part message in MIME format.\r\n\r\n"
		. "--$sMimeBoundary\r\n"
		. "Content-Type: text/plain; charset=\"iso-8859-1\"\r\n"
		. "Content-Transfer-Encoding: 7bit\r\n\r\n"
		. $sMessage."\r\n"
		. "--$sMimeBoundary\r\n"
		. "Content-Disposition: attachment;\r\n"
		. "Content-Type: Application/Octet-Stream; name=\"$sFilename\"\r\n"
		. "Content-Transfer-Encoding: base64\r\n\r\n"
		. $sData."\r\n"
		. "--$sMimeBoundary\r\n";

// Send the message.
mail($sEmailAddress, $sEmailSubject, $sContent, $sHeaders);

// Delete the file - we don't need it any more.
unlink($sFilename);