Thursday, December 21, 2017

De-identifying Data for Confidentiality - Part II

For part I, see De-identifying Data for Confidentiality - Part I

De-identifying Field Data

In certain instances, simply randomizing records is not sufficient. Some field values carry inherent information regardless of the record that holds them. Phone numbers, Social Security numbers, patient numbers, and the like contain information by themselves that may breach confidentiality. HIPAA regulations in particular are very strict about displaying any data that could identify a patient.

So to further mask your data, it may be necessary to scramble character data within the field itself. Now, it doesn't make sense to randomize some types of fields. Name and address fields in particular will look strange if randomized in this way.

Not only does it look strange, the capitalization of the names makes it fairly easy to re-identify the field values. So this function needs to be used with some discretion. It is best used with character data that is composed of numbers.

A further complication is that character data sometimes uses an input mask that displays characters that may or may not be stored. For instance, when you see a social security number displayed as 111-22-3333, you don't really know whether the number was stored as displayed or stored as 111223333. If the dashes are stored, the number might be randomized as 2-33311-13. So any function that randomizes character data must take these complications into account.
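To illustrate the idea (a hypothetical sketch, not the routine itself): if only the digits are scrambled and the stored display characters are re-applied afterwards with VBA's Format function, the result keeps its shape. The format string "@@@-@@-@@@@" here is my own example of what might be passed in:

```vba
Dim strScrambled As String
'*** scramble only the digits of "111-22-3333" (dashes excluded),
'*** then re-apply the stored display characters with Format.
'*** "@@@-@@-@@@@" is a hypothetical format string.
strScrambled = Format("313121323", "@@@-@@-@@@@")
'*** strScrambled now looks like a well-formed SSN, e.g. 313-12-1323
```

This is the role the FieldFormat argument plays in the routine below.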

Overview of the Process

This process is similar to the record randomization process, but it only works on character data. Instead of creating a temporary table that holds all the records, we'll create two temporary string variables: one to hold the value of the field we want to randomize and another to hold the randomized value. Then we'll grab random characters from the source string and append them one at a time to the target string. We'll start at the top record of the main table, randomize the character string, write it back into the record, then proceed through the entire table until all values of the field have been randomized.

Unlike the record randomization process, this process will work on fields with Unique Indexes and Primary Keys. In fact, if a field has a unique index, you should not remove it. It is entirely possible to create duplicate values at random, so I will include error trapping to identify duplicate records. If one is discovered, the routine will randomize the string again until it creates a non-duplicate field value.

While this process will work on a Primary Key field, you should be careful. If this field participates in a Relationship, you will lose relational integrity. That is, the values in the primary key will no longer match values in the foreign key of the related table. If you do this, you should make certain your relationships have Cascade Updates property set to Yes.

This is not an issue, however, if your Primary Key is an Autonumber field, for two reasons: first, an Autonumber is a long integer rather than character data, and second, you cannot change the value of an Autonumber.

Randomizing Character Fields

Sub RandomizeCharacterField _
    (TableName As String, _
     FieldName As String, _
     FieldFormat As Variant)
'*** This subroutine randomizes the characters of the
'    indicated field in the selected table. It requires
'    a Reference to the Microsoft DAO 3.6 Object Library
On Error GoTo Err_RandomizeCharacterField

Dim db As DAO.Database
Dim rsTableName As DAO.Recordset
Dim i As Integer, StrLen As Integer
'*** i is used with the Rnd() function to select
'    a random character position
Dim strSource As String
Dim strTarget As String

'*** seed the random number generator
Randomize
'*** open the database
Set db = CurrentDb
'*** open the table to be scrambled
Set rsTableName = db.OpenRecordset(TableName)
'*** if the field format is Null, set it to the empty string
FieldFormat = Nz(FieldFormat, "")

'*** loop through the table, starting with the first record
Do Until rsTableName.EOF

ReRandomize:    'return label: used if a duplicate is produced

    '*** set the source string to the field value
    '    and the target string to the empty string
    strSource = rsTableName(FieldName)
    strTarget = ""

    '*** repeat loop until all characters have been
    '    removed from the source string
    Do While Len(strSource) > 0
        StrLen = Len(strSource)
        '*** select a character position at random
        If StrLen = 1 Then
            i = 1
        Else
            i = Int(StrLen * Rnd + 1)
        End If
        '*** grab the selected character and append it to the
        '    target string, skipping any mask display characters
        If InStr(FieldFormat, Mid(strSource, i, 1)) = 0 Then
            strTarget = strTarget & Mid(strSource, i, 1)
        End If
        '*** delete the selected character from the source string
        strSource = Left(strSource, i - 1) & _
            Mid(strSource, i + 1)
    Loop

    '*** when the target string is complete, write it back to
    '    the original table, applying the appropriate format
    rsTableName.Edit
    rsTableName(FieldName) = Format(strTarget, FieldFormat)
    rsTableName.Update

    '*** proceed to the next record
    rsTableName.MoveNext
Loop

MsgBox "Field: " & FieldName & " from Table: " & _
    TableName & " has been character scrambled."

Exit_RandomizeCharacterField:
'*** clean up object variables
Set rsTableName = Nothing
Set db = Nothing
Exit Sub

Err_RandomizeCharacterField:
If Err.Number = 3022 Then
    '*** if there is a duplicate value in a field
    '    with a unique index, randomize it again
    Resume ReRandomize
Else
    MsgBox Err.Description
    MsgBox Err.Number
    Resume Exit_RandomizeCharacterField
End If
End Sub

Calling the Routine

In the Character Randomizing routine above, I have a line of code that applies the appropriate format. However, I don't say how you'll know what the appropriate format is. That's because we'll decide this in the calling routine.

Whether we send a format string to the randomization routine depends on whether the field has an input mask. It also depends on what kind of input mask it has.

There are three possibilities for the InputMask property of a field: an input mask that saves the characters, an input mask that does not save display characters, and no input mask at all. However for our purposes, an input mask that does not save characters is the same as no input mask at all. So all we have to test for is an input mask that saves display characters.

An input mask has three sections. The first is the input mask itself. The second tells the mask whether to save the display characters or not, and the third tells what kind of placeholder character the input mask will display. Semicolons divide the sections.

We are only concerned here with the second section. If the input mask saves the characters, the second section holds a zero. A one or nothing indicates the mask will not save the display characters. Therefore, we can test the input mask for ";0;". If this string does not appear in the input mask, the display characters are not saved.

The InputMask property is one of those properties that do not exist for the object unless a value has been assigned to it. So if a field does not have an input mask, trying to read the property will produce an error. Therefore, I will trap for that error (3270) to cover the case where there is no input mask.
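As a sketch, these two checks could also be wrapped in a small helper. The function name is my own invention, not part of the routines in this post; it assumes a DAO reference is set:

```vba
'*** Sketch: returns True only if the field has an input mask
'*** that saves its display characters. Traps error 3270
'*** ("Property not found") when no mask has ever been set.
Function MaskSavesCharacters(fld As DAO.Field) As Boolean
    Dim Mask As String
    On Error Resume Next
    Mask = fld.Properties("InputMask")
    If Err.Number <> 0 Then Mask = ""   'error 3270: no mask
    On Error GoTo 0
    MaskSavesCharacters = (InStr(Mask, ";0;") > 0)
End Function
```

The calling routine below does the same thing inline with a GoTo-style error handler, which is the more traditional Access pattern.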

Again, how you implement this code depends on how it is going to be used. As with the Record Randomizing routine, I chose to create a form with combo boxes to hold the table and field names (cboTable and cboFieldName1) and one for the format (cboFieldFormat). I also added a control that displays the input mask for identification purposes.

The following code uses such a set up and would be in the On Click event of the cmdRunCharRandomization command button of the form.

Private Sub cmdRunCharRandomization_Click()
'*** This code is the calling routine.
On Error GoTo Err_cmdRunCharRandomization_Click

Dim db As DAO.Database
Dim tdf As DAO.TableDef
Dim fld As DAO.Field
Dim Mask As String

Set db = CurrentDb
Set tdf = db.TableDefs(cboTable)
Set fld = tdf.Fields(cboFieldName1)

'*** read the InputMask property of the field
Mask = fld.Properties("InputMask")

'*** if the InputMask of the field shows that extra
'    characters are saved, call the randomization
'    routine and include the FieldFormat string.
If InStr(Mask, ";0;") > 0 Then
    Call RandomizeCharacterField( _
        cboTable, cboFieldName1, cboFieldFormat)
Else
    '*** but if the InputMask shows characters are not
    '    saved, or if there is no InputMask, call the
    '    randomization routine without sending the
    '    FieldFormat string.
    Call RandomizeCharacterField( _
        cboTable, cboFieldName1, "")
End If

Exit_cmdRunCharRandomization_Click:
Exit Sub

Err_cmdRunCharRandomization_Click:
'*** if there is no mask, reading the property returns a
'    Property Not Found error (3270)
If Err.Number = 3270 Then
    Resume Next
ElseIf Err.Number = 3265 Then
    MsgBox "Please select both a table and a field"
    Resume Exit_cmdRunCharRandomization_Click
Else
    MsgBox Err.Description
    MsgBox Err.Number
    Resume Exit_cmdRunCharRandomization_Click
End If
End Sub


Figure 2: Example form that could be used to run the character randomization code.

Using a form like the one above allows you to select a field to scramble. You can select, in turn, all of the fields with identifying data. On my website, you can find a small sample database called CharacterScramble.accdb, which demonstrates this process, including the calling form.

One caution. If you are randomizing fields to de-identify data to comply with regulatory rules, you should get approval for using this process with your regulatory compliance officer. For instance, HIPAA rules specify that if you de-identify data, you must use an algorithm which prevents the data from being reconstructed. Since this routine selects characters at random, I believe it complies with this instruction, but only your regulatory officer would know for sure.


There are many reasons for masking, blinding, scrambling, or de-identifying data within a database. Creating realistic data for demonstration purposes and masking data to comply with regulatory rules are just two. But with a little programming expertise, it doesn't have to be an onerous task.

Tuesday, December 5, 2017

De-identifying Data for Confidentiality - Part I

Data Scramble

There are a number of circumstances when you as a developer might need to randomize data within a database. You may need to create sample data to test or demonstrate a database application. Or you may need to mask or "de-identify" confidential data to comply with regulatory agency rules. This article demonstrates two methods that can be used, either separately or in conjunction, to scramble your database.

Creating test data for a database application is always a difficult task. You need to create enough records to adequately test database performance. A design that works well with a hundred records might not work with ten thousand. You also need to create data varied enough to mimic real-world situations, data that will test the limits of the business rules. Developers have a tendency to create data that will work with their application, not data that will break it.

The very best source for creating test data is actual customer data. Nothing mimics real-world data like real-world data. Unfortunately, your customers may not appreciate their data being used in this way. Worse yet, there may be regulatory considerations. For instance, the health care industry has to comply with HIPAA (Health Insurance Portability and Accountability Act) regulations, which have very strict compliance rules.

Randomizing Records

Let's start by considering the simplest case. You simply want to mask customer data without the necessity of complying with regulations. It is probably sufficient to simply randomize each field that could identify a record to effectively mask the data.

Tables 1 and 2 show the before and after of a small Customer table that has been randomized.

Table 1: Original Customer table

Table 2: The Customer table after randomization

Of course, with only three records, the randomization is not very random, but the larger the result set, the better the randomization will be. So let's see how to do this.

Overview of the Process

The first thing we'll do is create a temporary table that holds the values we want to randomize. This is best done with a Make Table query (SELECT...INTO). Then we'll start at the top record of the main table and copy random values from the temporary table back into the main table. As each value is copied back in, we'll delete it from the temporary table. Repeat this process for each field and the records will be thoroughly scrambled.

One caveat: This process won't work if the field is a Primary Key or has a Unique Index because it will temporarily cause duplicate records. So, you'll have to manually remove the constraint. Of course it is possible to remove and re-establish constraints programmatically, but that is beyond the scope of this article. Lastly, this code will not work with an Autonumber field under any circumstances.

Creating Temporary Table

The first thing we have to do is create a temporary table, called "tblRandom" to hold the field values. (I could have used an array to do the same thing, but since this is a database project, I prefer to use database objects.) As I said, we can use a SQL statement in the form of a Make Table Query. The following subroutine shows how to do that.

Sub CreateRandomTable(TableName As String, FieldName As String)
'*** This routine creates the tblRandom table
'    used to hold the values to be randomized
On Error GoTo Err_CreateRandomTable

Dim strSQL As String

'*** create the query based on the passed arguments
strSQL = "SELECT [" & FieldName & "] INTO tblRandom " & _
    "FROM [" & TableName & "];"
'*** delete the table if it exists
CurrentDb.TableDefs.Delete "tblRandom"
'*** run the SQL make-table query
CurrentDb.Execute strSQL

Exit_CreateRandomTable:
Exit Sub

Err_CreateRandomTable:
'*** if the error is 3265, the table does not exist
If Err.Number = 3265 Then
    '*** so skip the Delete command
    Resume Next
Else
    MsgBox Err.Description
    Resume Exit_CreateRandomTable
End If
End Sub

The error trapping code in the subroutine is necessary because the Make Table query will fail if the table "tblRandom" already exists, so the routine deletes it first. But if the table does not already exist, the Delete command itself fails with error 3265, so the code traps for that error and skips the Delete command.
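An alternative to trapping error 3265 is to test for the table before deleting it. A minimal sketch (the helper name is my own, not part of the sample database):

```vba
'*** Sketch: returns True if a table of the given name
'*** exists in the current database.
Function TableExists(strName As String) As Boolean
    Dim tdf As DAO.TableDef
    On Error Resume Next
    Set tdf = CurrentDb.TableDefs(strName)
    TableExists = (Err.Number = 0)
    On Error GoTo 0
End Function
```

With it, the Delete line could be guarded as: If TableExists("tblRandom") Then CurrentDb.TableDefs.Delete "tblRandom". Either approach works; the error trap keeps the subroutine self-contained.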

Randomizing Records

Now we are ready to randomize the fields. The following subroutine will do that.

Sub RandomizeTableField _
    (TableName As String, _
     FieldName As String)
'*** This subroutine randomizes the indicated
'    field in the selected table.
On Error GoTo Err_RandomizeTableField

Dim db As DAO.Database
Dim rsTarget As DAO.Recordset
Dim rsSource As DAO.Recordset
Dim i As Integer, upperbound As Integer
'*** i is used with the Rnd() function to select
'    a random record

'*** seed the random number generator
Randomize
'*** open the database
Set db = CurrentDb
'*** open the source table (tblRandom)
Set rsSource = db.OpenRecordset("tblRandom", dbOpenTable)
'*** open the target table (original table)
Set rsTarget = db.OpenRecordset(TableName, dbOpenTable)

'*** repeat loop until all records have been randomized
Do Until rsTarget.EOF
    upperbound = rsSource.RecordCount
    '*** select a random record in the source
    '    table and move to it
    rsSource.MoveFirst
    If upperbound > 1 Then
        i = Int(upperbound * Rnd)   'offset: 0 to upperbound - 1
        rsSource.Move i
    End If

    '*** write the value from the source table into the target
    rsTarget.Edit
    rsTarget(FieldName) = rsSource(0)
    rsTarget.Update

    '*** delete that value from the source table
    rsSource.Delete

    '*** proceed to the next record
    rsTarget.MoveNext
Loop

MsgBox "Field: " & FieldName & " from Table: " & _
    TableName & " has been scrambled." & vbCrLf & _
    "If you removed an index, please recreate it."

Exit_RandomizeTableField:
'*** clean up object variables
Set rsTarget = Nothing
Set rsSource = Nothing
Set db = Nothing
Exit Sub

'*** error trapping to catch constraints
Err_RandomizeTableField:
'*** if the field has a unique index
If Err.Number = 3022 Then
    '*** display message informing the user to remove the index
    MsgBox "The field that you have chosen: " & _
        FieldName & vbCrLf & _
        "has a Unique Index. You must remove this " & _
        "index to proceed." & vbCrLf & _
        "Once you have scrambled the field, " & _
        "you can restore the index."
    Resume Exit_RandomizeTableField
Else
    MsgBox Err.Description
    Resume Exit_RandomizeTableField
End If
End Sub

Calling the Routine

Now, all that's left is to pull it all together with a calling routine. How you do that depends on how it is going to be used. For instance, this code could be placed in a form with combo boxes (cboTable and cboFieldName1) holding the table and field names. Then you could select the table and field from lists.

The following code uses such a set up and would be in the On Click event of the cmdRunRandomization command button of the form.

Private Sub cmdRunRandomization_Click()
'*** This code is the calling routine
On Error GoTo Err_cmdRunRandomization_Click

Call CreateRandomTable(cboTable, cboFieldName1)
Call RandomizeTableField(cboTable, cboFieldName1)
'*** delete the temporary table
CurrentDb.TableDefs.Delete "tblRandom"

Exit_cmdRunRandomization_Click:
Exit Sub

Err_cmdRunRandomization_Click:
'*** if the error is 3265, the table does not exist
If Err.Number = 3265 Then
    '*** so skip the Delete command and
    '    resume on the next line
    Resume Next
Else
    MsgBox Err.Description
    Resume Exit_cmdRunRandomization_Click
End If
End Sub


Figure 1: Example form that could be used to run the field randomization code.

Using a form like the one above allows you to select a field to scramble. You can select, in turn, all of the fields with identifying data. On my website, you can find a small sample database called Datascramble.accdb, which demonstrates this process, including the calling form.

Sometimes It’s Not Enough

But sometimes just re-arranging field values in the records is not enough. Sometimes, your fields contain sensitive data like Social Security Numbers or Patient Identifiers which can’t be displayed even if it’s attached to the wrong record.

In that case, you may want to scramble the characters within the field itself. I address this problem in De-identifying Data for Confidentiality - Part II.

Wednesday, November 22, 2017

Running Action Queries in VBA

One of the powerful features of Microsoft Access is its ability to run queries in Visual Basic for Applications (VBA) code. However, there are a couple of problems that plague developers when they attempt to do this. One is the problem of confirmation messages when running an Action query.

There are a variety of circumstances under which you might want to run a query in VBA code. You may want to just display the results of a Select query to the screen at the push of a button. You may want to run an Action Query in a code module as part of another process. You may even want to open a virtual recordset to do some data manipulation that can't be done directly in SQL. Access provides you several ways to accomplish this, depending on what you are trying to do.

There are two broad categories of queries in Access: Select queries and Action queries. Select queries simply return and display records. Action queries, on the other hand, actually modify the data in your tables. Append queries, Update queries, and Make Table queries are all action queries.

Confirmation Messages

The simplest way to run either a Select query or Action query is with the OpenQuery method of the DoCmd statement. Like this:

DoCmd.OpenQuery "Query1"

This statement will run either type of query exactly as if you had run it from the Query Window. If the query is a Select query, it will simply return the query results to the screen, as in Figure 1.

Figure 1: Results of a Select query displayed to the screen using DoCmd.OpenQuery.

However, since Action queries modify data in your tables, they don't return anything to the screen. So Access displays a couple of confirmation dialog boxes to warn you that you are about to change your data. For instance, running an Update query will make the confirmation dialog box in Figure 2 appear.

Figure 2: Dialog box asking for confirmation of the Update query.

This is followed by a second dialog box confirming the action, like Figure 3.

Figure 3: Second confirmation dialog box for Update Query

While these messages are generally a good thing when using the Access Graphical User Interface (GUI), if you are running an action query as part of an automated process, these confirmation boxes can be annoying. There are several ways to turn off these messages.

Set Options

First of all, you could turn off all confirmation boxes for Action queries by going to the Tools menu, clicking Options, and then clicking the Edit/Find tab. Under Confirm, clear the Action queries check box. On the whole, however, this is generally not a good idea. This option will affect all action queries in all databases. It would be better to target just those queries that you are certain you want to run without confirmation.

Set Warnings

Another way to keep these messages from appearing is to turn off the warning messages programmatically. The SetWarnings method of the DoCmd statement will turn off the warning messages until you turn them on again. To do this, you surround the query you want to run with one command to turn them off and another to turn them back on.

DoCmd.SetWarnings False

DoCmd.OpenQuery "ActionQuery1"

DoCmd.SetWarnings True

It is important to use these statements in pairs because all dialog boxes will be turned off (even the ones you want) until the database is shut down and restarted.
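One defensive pattern (a sketch of my own, following the error-handler style used elsewhere in this post) is to route the re-enable through the error exit, so the warnings come back on even if the query fails:

```vba
'*** Sketch: guarantee SetWarnings is restored even on error
On Error GoTo Err_Handler
DoCmd.SetWarnings False
DoCmd.OpenQuery "ActionQuery1"

Exit_Handler:
'*** always turn warnings back on, error or not
DoCmd.SetWarnings True
Exit Sub

Err_Handler:
MsgBox Err.Description
Resume Exit_Handler
```

This keeps a failed query from leaving the whole application with its warnings switched off.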

The problem with this method is that in addition to turning off the confirmation boxes, it will also turn off all error messages. So if there is an error when running the query, it will fail silently, leaving you no indication it had failed. This is rarely optimal.

Execute Method

The best solution is to turn off the confirmation messages while allowing the real error messages to display.

The best way to run queries in code is through the Data Access Object model or DAO. DAO gives you programmatic access to the entire database. Through DAO, you can change the structure of the database, modify the data in the database, and create or modify the database objects like forms, reports and queries. You can also use DAO to execute action queries.

To use DAO, you must create and initialize a database object variable. Like this:

Dim db As DAO.Database

Set db = CurrentDb()

Once you have created a database variable, you can use the Execute method to run the action query.

db.Execute "Query2", dbFailOnError

The Execute method assumes you know what you're doing, so it does not display confirmation messages. The optional parameter, dbFailOnError, will raise a trappable run-time error if the query fails, so failures do not pass silently.

Technically, you wouldn't have to create and initialize a database variable to run this query. Access provides a shortcut.

CurrentDb.Execute "Query2", dbFailOnError

The CurrentDb object will give you direct access to the database. It creates a temporary instance of the database that persists only until that line is executed. It can only be used for that one command. However, there are times when you want the database object to persist because you want to do multiple things with it, which leads me to opening parameter queries in code.
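For instance (a small sketch; RecordsAffected is a standard DAO property), keeping the database variable lets you check how many rows the action query changed, something the one-shot CurrentDb shortcut cannot do:

```vba
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "Query2", dbFailOnError
'*** the persistent variable lets us inspect the result
'*** of the Execute call we just made
Debug.Print db.RecordsAffected & " record(s) affected"
Set db = Nothing
```

With CurrentDb.Execute, a subsequent CurrentDb.RecordsAffected would refer to a brand-new database instance and tell you nothing about the query you just ran.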

Sample Database

You can find a companion sample database that illustrates how to suppress confirmation messages when running an Action query in VBA code. It also pairs well with the Action Queries series.

You can find the sample here:

Thursday, November 9, 2017

Bang Vs. Dot in Forms

In a previous post (Bang Vs. Dot In DAO), I wrote about the difference between the Dot (.) and the Bang (!) in DAO. It's pretty straightforward: the dot is used to separate one level of the DAO hierarchy from another and to separate an object from its methods and properties; the bang is used to separate an object from the collection in which it is contained.

This is true as far as it goes, but two types of objects in Access, Forms and Reports, muddy the waters considerably. Because forms and reports are classes, the controls on them are both members of the form's Controls collection and properties of the form or report itself.

You can verify this by creating a new form or report object and looking at the Object Browser in the Visual Basic Editor.

(While it works the same on reports, I'm going to concentrate on forms for the moment.)

Create a new form: Form2, with no controls or Record Source. Open the Visual Basic editor and press <F2> to open the Object Browser. To the right, you'll see a list of Classes and Members. These members represent the properties and methods of the selected Class. Access creates a number of default methods and properties, which I'll ignore for now.

Next, create a new table: Table1(Table1ID, Field1, Field2). (See Figure 1)


Figure 1: Table1

(Note: throughout this post, my form's name will be MyForm and the control is called ControlName -- it could be any control, a textbox, combobox, label, or whatever)

Make this table the RecordSource for Form2. Table1ID, Field1, and Field2 appear in the member list. This demonstrates that the fields in the record source behind the form are properties of the form. See Figure 2.


Figure 2: Table1ID in the members list of the Form2 class

Next I'll reference the fields behind the form. To reference an object on a form, you start with the Forms collection, followed by a Bang (!), followed by the form name. Like this: Forms!Form2. This gives me a reference to the form itself.

Now, according to my definition above, following the form reference with a dot and the field name should work (because the fields are properties of the form) but the bang should not, because I haven't created any controls yet. However, on testing, I find that both:

Forms!Form2!Table1ID
Forms!Form2.Table1ID

return the value of Table1ID.

But even though they produce the same result, they aren't the same. It's really a case of two notations that mean different things but nevertheless almost always give the same result. The bang (!) notation specifically denotes that what follows is a member of a collection; in this case, a member of the form object's default collection, the Controls collection. The dot (.) notation denotes that what follows is a property or method of the preceding object.

ME Object

And then, just to muddy the waters even further, there's the "Me" object. The Me object is used in Visual Basic for Applications (VBA) to reference an instance of a class module. It is an implicitly declared variable and is available to every procedure within the class module and only within the class module.

Since Access Form and Report Modules are classes, you can also use the Me object to refer to the Form or Report itself. This allows us to take a shorthand reference to an object on a form. I'll address form referencing in a later post, but for now, I can reference a control on a form explicitly:

Me!ControlName
But as I said, the Me object muddies the water because Me.ControlName also works.

I know why Me!ControlName works. It is really just a short-hand way of referring to the default collection and property of the Form object.

The Controls collection is the default collection of the Form object, and Item is the default property of the Controls collection. An explicit reference to a control looks like either of these:

Me.Controls.Item("ControlName")
Me.Controls.Item(0) (assuming 0 is the correct index)

Since Item is the default property, you can also do these:

Me.Controls("ControlName")
Me.Controls(0)

and since Controls is the default collection, you can do these:

Me("ControlName")
Me(0)
So what about Me.ControlName?

This is the really cool part about forms -- when a form loads, it helps you out by adding every control on the form as a property of the form. This is why

Forms!MyForm.txtTextBox

.. works. You're asking for the "txtTextBox" property of Forms!MyForm -- which is a pointer to the control, in this case, the text box object.

Which should you use?

So, which is actually preferred? The answer is ... it depends.

Reasons to use Me Dot (Me.ControlName)

  1. Automatic Intellisense support.
  2. Runtime error if control is missing or mis-spelled.
  3. Slightly faster than Me Bang.

Reasons to use Me Bang (Me!ControlName)

  1. Me Bang ALWAYS works to reference the value of a control.
  2. If a control is named the same as a reserved word (e.g., "Name"), Me Bang will correctly reference the control.
  3. If the Record Source of a form is modified at run-time, Me Bang will continue to work.
  4. Intellisense can be initiated with <ctrl>+<space>.

Thursday, October 26, 2017

Bang Vs. Dot In DAO

You sometimes hear that "the bang (!) refers to user-defined things and dot (.) refers to Access-defined things." Although that is the standard rule of thumb, it is not exactly correct. More precisely, the bang (!) serves to separate an object from the collection which contains it, and the dot (.) serves to separate one level of the DAO hierarchy from another.

Let me back up.

DAO naming is hierarchical in nature, sort of like the DOS path. And like DOS, you can refer to an object using a fully qualified name or a semi-qualified name. In DOS, a fully qualified name would be like this:

c:\msoffice\access97\test.mdb

If you are already in a folder, you can refer to the file by its semi-qualified name:

test.mdb

In the same way, you can refer to an Access object by its fully qualified name:

DBEngine.Workspaces(0).Databases![c:\msoffice\access97\test.mdb] _
    .TableDefs!Table1

or if you assume the default DBEngine (which we almost always do), the default Workspace, and the default Database, you can refer to the table by its semi-qualified name:

TableDefs!Table1

If you look at the fully qualified name like this:

DBEngine . Workspaces(0) . Databases![c:\msoffice\access97\test.mdb] . TableDefs!Table1
you can see the DAO hierarchy levels more easily and how the dot separates them. (Much like the "\" in DOS.)

The dot also serves to separate an object from its properties and methods, which can also be thought of as another level in the hierarchy. So I can refer to "TableDefs!Table1.RecordCount". RecordCount being a property of Table1.

The bang (!) separates objects from the collections which hold them, thus it separates "Table1" from the collection "TableDefs" and the object "c:\msoffice\access97\test.mdb" from its collection "Databases".

Since most objects are defined by you, and since levels of DAO hierarchy are defined by Access, we get the rule of thumb named earlier.

DAO Naming Rules:

  1. The dot serves to separate one level of the DAO hierarchy from another in a fully qualified object name.
  2. The dot also serves to separate an object from its methods and properties.  (This, by the way, is the principle use for most people).
  3. The bang serves to separate an object from the collection in which it is contained.

In DAO 3.0 (Access 2.0), you could use either the bang or the dot to separate an object from its collection. But in DAO 3.5 (Access 97), most objects didn't support the dot for this purpose. DAO 4.0 (Access 2000 and beyond) doesn't support it at all. You have to use the bang and dot properly or you will get a syntax error.

But this isn't really the end of the story. When using class modules in Access (like Forms and Reports), the Bang and Dot behavior is slightly different. To find out more, read my post: Bang Vs. Dot in Forms.

Wednesday, October 18, 2017

Really Bad Design Decisions: A Case Study

Sometimes a single design decision can have a cascade effect, which causes multiple, secondary design errors. One such error, commonly made by novice developers, is to slavishly follow a pre-existing paper form to determine their table design.

Now certainly, when creating a database to replace a paper-based system, it is vitally important to assemble all of the forms and reports used in the system. It is by a careful review of these documents that the developer determines a majority of the fields he or she will need in order to store and report the information in the system. After all, if an input form has a place for Order Date, then the database needs a field to store it. Likewise, if a report displays the Order Quantity, then this information has to be stored in the database somewhere.

But paper forms are not created with proper database design in mind. They are designed to make it easy for humans to fill out. This design can be at odds with established design principles. By blindly following the paper form to determine the database design, the developer can create a system subject to numerous data and relational integrity errors.

Case Study: OB Log

Several years ago, I ran into a database that was the poster child for this error. I was asked to create a database to automate a logbook for the Obstetrics department of a local hospital. Someone in the department, who had some experience with Access, had taken a first pass at creating a database.

Figure 1 shows the main data entry for the application.

Figure 1: Main entry form for the OB Log database

For the purposes of this article, we're going to ignore the numerous application design errors and concentrate on the database design errors because they're much more important and difficult to fix. Nevertheless, I'd be remiss if I didn't at least mention them. They are:

  1. Hideous use of color. With color, less is more.
  2. The controls are scattered about, making the user hunt for them.
  3. The controls are not exactly aligned, giving the form an amateur look.
  4. The labels for the controls are sunken. This confuses the user as to which controls data can be entered in. In general, labels should be flat, text boxes should be sunken, and buttons should be raised. Any variation just confuses users.

But these application design issues pale in comparison to the database design problems.

Base Assumption: Paper Form Design Determines Database Design

All of the problems below stem from a single assumption: the design of the paper form is also the best design for the database. As we will see, this is not the case. Let's look at the problems resulting from this decision.

Problem 1: Single table design

Because all of the information here was on a single paper form, the developer incorrectly assumed it should all go in one table. The first thing I did when I looked at the database was to open the Relationship Window. It was blank. I knew at that point the design was in trouble. There were supplementary tables, a bewildering number, in fact, but they were just look-up tables that had no relationship to the main data table.

For instance, there was a look-up table called "Dilatation," which held the numbers 1 through 10, the acceptable values for the Dilatation field. There was a table called "Hours," which held the numbers 1 through 12, and a table called "ZeroToThree," which held the numbers 0 through 3. There were also look-up tables for the doctors and nurses.

While many of these tables were in fact useful, there were no tables to hold information representing relationships in the data. In other words, the information that was the meat of the application was in a single table. It is rare when all the information of a complex process can be stored in a single table.

Problem 2: Multiple Yes/No fields

The reason complex processes can rarely be stored in a single table is because most of the time, the data contains One-To-Many relationships. For instance, each Delivery can have one or more Induction Indications, i.e. reasons why labor should be induced. On the paper form, these indications were represented like this:

Figure 2: Paper form layout for Induction Indicators

Each delivery event can have multiple reasons for inducing labor. But since you can't put multiple values in a single field, the developer chose to create 8 Yes/No fields, each representing one indication. On the form, they looked like this:

Figure 3: Portion of the application form implementing Induction Indications.

At first blush, this is a reasonable solution. You can easily query the table for one or more of these indications by testing for a True value in the corresponding field, for instance, with a criterion like WHERE [Elective Induction]=True.

However, other, more complex queries are difficult or impossible with this design. For instance, what if I wanted a count of the following Indications: Elective Induction, Macrosomia, and Preterm PROM with a result like this:

Figure 4: Complex query counting the number of Induction Indicators.

With the single-table design, I would have to create a complex Union query like this:

SELECT "Elective Induction" AS [Induction Indications], Count([Elective Induction]) AS [Count Of Induction Indication]
FROM [OB DeliveryOld]
WHERE [Elective Induction]=True
GROUP BY "Elective Induction"

UNION

SELECT "Macrosomia" AS [Induction Indications], Count([Macrosomia]) AS [Count Of Induction Indication]
FROM [OB DeliveryOld]
WHERE [Macrosomia]=True
GROUP BY "Macrosomia"

UNION

SELECT "Preterm PROM" AS [Induction Indications], Count([Preterm PROM]) AS [Count Of Induction Indication]
FROM [OB DeliveryOld]
WHERE [Preterm PROM]=True
GROUP BY "Preterm PROM";

If I wanted a count of all the Indications, I would have 8 separate Select statements Unioned together, one for each Indication.

However, in a properly designed database, the query to do this would be as simple as:

SELECT InductionIndications, Count(tblInductionIndications.II_ID) AS [Count Of Induction Indications]
FROM refInductionIndications INNER JOIN tblInductionIndications ON
refInductionIndications.II_ID = tblInductionIndications.II_ID
WHERE InductionIndications In ("Elective Induction","Macrosomia","Preterm PROM")
GROUP BY InductionIndications;

So what is the proper design? The key to understanding this is to realize that you can group the individual Indication fields as values for a single field in your table. These values can be stored in a separate lookup table called refInductionIndications. If a delivery could only have one of these indications, there would be a simple One-To-Many relationship between the Delivery table (OBDelivery) and the reference table.

However, since each delivery can have one or more induction indications, there is actually a Many-To-Many relationship. Each Delivery can have one or more InductionIndications, and each InductionIndication can be part of one or more Deliveries.

To create this relationship, you create an intersection table (sometimes called a linking table). The only two fields in this table are the primary key fields of each of the other tables. These fields become foreign keys in relationships created with the other two tables. To make sure you don’t get duplicate records in the intersection table, you make the foreign key fields a compound primary key.
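The intersection-table idea can be sketched in code. The names below (refInductionIndications, tblInductionIndications, OBDelivery, II_ID, BirthTrackingID) follow the figures in this post, but SQLite stands in for Access/Jet here, so treat this as an illustrative sketch rather than the actual application schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Lookup table of indications and the main Delivery table
cur.execute("CREATE TABLE refInductionIndications (II_ID INTEGER PRIMARY KEY, InductionIndications TEXT)")
cur.execute("CREATE TABLE OBDelivery (BirthTrackingID INTEGER PRIMARY KEY)")

# Intersection (linking) table: just the two foreign keys, combined
# into a compound primary key so duplicate pairings are impossible
cur.execute("""
    CREATE TABLE tblInductionIndications (
        BirthTrackingID INTEGER REFERENCES OBDelivery,
        II_ID           INTEGER REFERENCES refInductionIndications,
        PRIMARY KEY (BirthTrackingID, II_ID)
    )""")

cur.executemany("INSERT INTO refInductionIndications VALUES (?, ?)",
                [(1, "Elective Induction"), (2, "Macrosomia"), (3, "Preterm PROM")])
cur.execute("INSERT INTO OBDelivery VALUES (1)")
cur.executemany("INSERT INTO tblInductionIndications VALUES (?, ?)",
                [(1, 1), (1, 3)])   # one delivery, two indications

# The compound key rejects a duplicate pairing
try:
    cur.execute("INSERT INTO tblInductionIndications VALUES (1, 1)")
except sqlite3.IntegrityError:
    print("duplicate rejected")

# Counting indications is now a simple join plus GROUP BY
rows = cur.execute("""
    SELECT r.InductionIndications, COUNT(t.II_ID)
    FROM refInductionIndications AS r
    INNER JOIN tblInductionIndications AS t ON r.II_ID = t.II_ID
    GROUP BY r.InductionIndications
""").fetchall()
print(rows)
```

Adding a ninth indication later means adding one row to the lookup table; the join query above needs no changes.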

In the Relationship Window, the relationship looks like figure 5:

Figure 5: Correct design for storing multiple Induction Indications per Delivery.

Implementing this in the application can be accomplished with a simple subform whose record source is a query based on a join of tblInductionIndications and refInductionIndications. In this way, the user can select as many Induction Indications as required. The subform would look something like figure 6:

Figure 6: Subform for Induction Indications for properly designed application

Another problem of having multiple Yes/No fields to represent grouped data falls under the heading of maintenance. What happens if a new Induction Indication must be added to the database?

With the single-table design, the table design itself would have to be modified, of course, but so would the form (a new check box would have to be added to an already cluttered form), and EVERY query and report using Induction Indications would have to be changed. This would require heavy developer support.

On the other hand, with the proper database design (Figure 5), adding a new indication is as easy as adding a new value to the lookup table. The users can easily do it themselves through a maintenance form in the application, and it would require no modification of the application at all.

This problem was also repeated in the Vacuum/Forceps Indications and C-Section Indications sections. For each of these, the developer created multiple Yes/No fields when, in fact, a separate table was required.

Problem 3: Complex Programming to Overcome Design Flaws

When a developer does not pay enough attention to the initial design, it often requires complex programming to overcome the flaws.

Problem 3a: Multiple Births

The design of the paper form also misled the original developer regarding birth information. He never asked the obvious question: what happens if there are multiple births for a single delivery, i.e. twins or triplets?

The original paper form had only one place for Birth information. When there were multiple births, hospital staff would simply use a second form, only fill out the birth portion, and staple it to the first form. This in itself should have told the developer that there was a One-To-Many relationship between the mother's information and birth information.

Because the developer was stuck with the single-table design, he was forced to create program code to: 1) duplicate all of the Delivery information into a new record, 2) delete the Birth information from the new record, and 3) allow the user to enter the new Birth information for each of the multiple births.

This design resulted in a huge amount of redundant data and opened the door for data integrity errors. If any of the Delivery information changed for a multiple-birth delivery, the users would have to edit multiple records, with the likelihood of data entry errors.

Of course, the correct design is to have a separate Birth table that is related to the Delivery table on the Primary Key field of the Delivery table (BirthTrackingID).

Figure 7: Correct design for modeling multiple Births per Delivery

The form could be modified to have a subform for holding the Births. In this way, the Delivery information would be held only once, yet the results for each individual birth could be accurately recorded while maintaining a link to the Delivery table.
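The One-To-Many design can be sketched as follows. BirthTrackingID is the key named in the post; the Birth table's name and fields (tblBirth, Weight, MotherName) are hypothetical stand-ins, and SQLite stands in for Access/Jet:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Delivery information is stored once...
cur.execute("CREATE TABLE OBDelivery (BirthTrackingID INTEGER PRIMARY KEY, MotherName TEXT)")

# ...and each Birth row links back to its Delivery, so twins or
# triplets are just extra rows, not duplicated Delivery records.
cur.execute("""
    CREATE TABLE tblBirth (
        BirthID         INTEGER PRIMARY KEY,
        BirthTrackingID INTEGER REFERENCES OBDelivery (BirthTrackingID),
        Weight          REAL
    )""")

cur.execute("INSERT INTO OBDelivery VALUES (1, 'Smith')")
cur.executemany("INSERT INTO tblBirth (BirthTrackingID, Weight) VALUES (?, ?)",
                [(1, 6.2), (1, 5.9)])   # twins

births = cur.execute("""
    SELECT d.MotherName, b.Weight
    FROM OBDelivery AS d INNER JOIN tblBirth AS b
         ON d.BirthTrackingID = b.BirthTrackingID
    ORDER BY b.BirthID
""").fetchall()
print(births)   # two birth rows share one delivery record
```

If the mother's information changes, only the single Delivery row is edited, and both births stay correct.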

Problem 3b: Date format

On the paper form, the Birth Date/Time was displayed in a particular format, e.g. "10:15 am Tues. 05/04/99". The developer believed he needed to input the date in that format. Therefore, he created the Date/Time field as a text field and created a complex process involving numerous toolbars, macros, and program code to assist the user in entering the data.

Clicking the Date/Time field produced Figure 8.

Figure 8: Series of toolbar pop-ups displayed when the Date/Time field was entered, intended to assist the user in entering a date into a text field. A simple input mask on a true Date/Time field would have been better.
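The principle is the same in any environment: store the real date/time value and apply the paper form's format only at output. In Access that is a Date/Time field plus the Format function; here is a Python sketch of the same idea (the format string only approximates the form's "Tues." abbreviation):

```python
from datetime import datetime

# Store the value as a real date/time, never as formatted text
born = datetime(1999, 5, 4, 10, 15)

# Format only at display time (Access would use Format() in a
# query or on a form; strftime plays that role here)
display = born.strftime("%I:%M %p %a. %m/%d/%y").lower()
print(display)   # close to the paper form's "10:15 am Tues. 05/04/99"
```

Because the stored value is a genuine date/time, it can still be sorted, filtered by range, and used in date arithmetic, none of which a text field allows.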

Problem 4: Copying Paper Form Design for Application Form

This last problem is not so much a database design problem as an application design problem. While it is important to follow the flow of the paper form to make data entry as easy and error-free as possible, the developer should not feel constrained to slavishly imitate the layout of the paper form when producing the application form.

As the developer, you should work with the client to develop the data entry form to make it as easy as possible for the users. This may also mean a redesign of the paper form to facilitate entry into the database.

In my case, I redesigned the database form to look like this:

Figure 9: Redesigned Data Entry Form

Each group of related data got its own tab at the bottom of the form. This led to a much cleaner design, which was easier for the data entry people to use. We also redesigned the paper form to facilitate this application form.

On-line Database Sample:

On my website, there is a sample database called "ReallyBadDatabase.mdb", which illustrates the problems discussed here, and another called "ReallyBadDatabaseReborn.mdb", which shows how I corrected them.


Does this mean the developer should completely ignore the layout of the paper forms? No. Assembling all of the input forms and reports is a vital part of the database development process. They will provide the majority of the fields necessary to the project. But you cannot let them determine your database design. As the developer, that's your job.

In general, when dealing with a paper input form do the following:

  1. Look for logical groupings within the fields on the form. For instance, PROM, Preterm PROM, PIH, and Macrosomia are all Induction Indications.
  2. Look for relationships in your groupings. For instance, each Delivery can have one or more Induction Indications. When one entity (Delivery) in your database can have multiple of another entity (Induction Indications), this tells you there needs to be a separate table for this entity.
  3. When there are a series of check boxes on a form, and they all represent alternatives of the same thing, you should not create multiple Yes/No fields, but instead create a single field where the acceptable values include all the labels of the check boxes. If more than one of these values can be selected, then you need an intersection or linking table to implement the Many-To-Many relationship.
  4. Regardless of the formatting on the form, all dates should be entered as Date/Time fields in the database. You can format the output with the Format function to meet the users' needs.
  5. Check with the users of the system to make sure you have accurately modeled the data and relationships of the system. Also work with them to create an application form that is easy to use. You may also have to work with them to redesign the paper form.

The developer has to delicately balance the needs of the users against proper database design techniques. In this way, the developer creates a system that is not only easy for the user, but also ensures accurate data.

Friday, September 29, 2017

Normalizing City, State, and Zip

Recently, I ran into a question on the internet about normalization that I thought would be good to repeat.


 I'm toying with the idea of starting a new project, so I'm in brainstorming mode for table design. I'll be recording customer information in this application. Typical stuff: First and Last Names, Company, Street, Apt, City State and Zip, Phone number(s) and extensions, E-mail.

How do you guys recommend setting up the tables for City State and Zip? I was thinking that I would have:

StateAbbr (Limited to 2 letters)

FKStateID (Lookup to TBL__State)

FKCityID (Lookup to TBL__City)

My customer information then would record only the zip code (PKZipID). And I could then use queries for the state, city, and zip information for forms, reports, etc.

Or is this beyond overkill?


 By strict normalization theory, having City, State, and Zip in the same table violates the 3rd Normal Form because there are functional dependencies between those fields. However, functional dependencies are not all the same. There are strong dependencies and weak dependencies.

A strong dependency is one in which the value of a dependent field MUST be changed if another field is changed. For instance, suppose I have Quantity, Price, and ExtendedPrice, where ExtendedPrice is a calculation of the other two. If I change either Quantity or Price, the ExtendedPrice MUST be changed.

A weak dependency is one in which the value of a dependent field MAY be changed if another field is changed. City, State, and Zip are examples of weak dependencies. If I change a person's city, I may not have to change their state. They may have moved within the same state. Likewise, if I change the state, I may not have to change the city. There is, after all, a Grand Rapids, Michigan and Grand Rapids, Minnesota. The relationship between city and zip is even more complicated.

Now, it is possible to represent these fields in a fully normalized fashion, but I contend that it is more trouble for very little gain. There are two main reasons for normalizing data: minimize redundant data and maximize data integrity. Both of these can be achieved by using lookup tables for City and State without trying to represent the relationship between the two. A zip code could be mis-typed, of course, but it could also be mis-selected from a list, so to my mind there's no real reason to have a lookup table.

If you did normalize these fields, you could have a selection process that would present all possible combinations of values if you selected the City. For instance, if you had a combo box for City, you could have cascading combo boxes to select only the appropriate States and Zip codes. But it would be just as easy to mis-select the right value from this list as it would be to mis-select from independent lookup tables. And, of course, you'd have to create and maintain these relationships.

Therefore, normalizing City, State, and Zip adds a complication to your data model for very little gain, and in my opinion, is a good example of when to denormalize.


Wednesday, September 20, 2017

The Normal Forms: In a Nutshell

In this series, I have tried to explain in non-mathematical terms what the first three Normal Forms mean and how they determine database design.

This is not the most useful method of learning normalization. In fact, many expert developers never learn the formal definition of the normal forms. If you haven't already, I suggest you read the following series:

ER Diagramming

However, I think it is useful to know what the Normal Forms are. Sometimes when you get stuck in a design, you can go back to the definitions to get yourself out of trouble.

So, in summary:

First Normal Form (1NF) says that each record must be unique, that is, it has a primary key. There are some additional restrictions on how such uniqueness is maintained, such as disallowing positional referencing and repeated columns.

Second Normal Form (2NF) says that each field in the record must depend on the whole primary key, not just a part of it.

Third Normal Form (3NF) says that no field must depend on any other field except the primary key.

William Kent, author of A Simple Guide to Five Normal Forms in Relational Database Theory, once abbreviated the first three normal forms like this:

"The Key, the whole Key, and nothing but the Key, so help me Codd."

Wednesday, September 13, 2017

The Normal Forms: Third Normal Form

Last time, in The Normal Forms: Second Normal Form, I discussed how to remove redundant data by identifying fields which are not functionally dependent on the entire primary key. Figure 1 shows the results.

Figure 1: Order table Decomposed into Orders and Order Details

This corrected some data anomaly errors in my data; however, data anomalies are still possible under 2NF. To prevent these anomalies, I need an additional rule: Third Normal Form (3NF).


A table is said to be in Third Normal Form (3NF) if:

  1. It is in Second Normal Form and
  2. All non-key fields are mutually independent, that is, all fields are functionally dependent ONLY on the primary key field(s).


The two main sources of data anomalies that 3NF corrects are 1) redundant data and 2) calculated values.

Redundant Data

Although I removed some of the redundant data when I split the Order table into Orders and OrderDetails, there is still some redundancy left, namely ProductNum and Item. Both of these fields depend on the entire primary key, so they comply with 2NF. However, the ProductNum and Item fields are mutually dependent, that is, they depend upon each other. The product number determines the item description, and the item description determines the product number.

Just as we saw in 2NF, redundancy can lead to inconsistent data being entered into the database or correct information being changed after the fact. Figure 2 shows some data anomalies possible under 2NF as a result of redundant data.

Figure 2: 2NF Data Anomalies Due to Redundant Data

Product A7S has two different items associated with it: either a wrench or a nail. Which is it?

Also, two product numbers (B7G and B7H) are associated with an Item called "saw". Is this the same saw or not?

Calculated Values

Mutual dependency is also an issue with storing calculated values. Suppose I had a Quantity and Price field and I decided to calculate the ExtendedPrice by multiplying them. Storing that result is a common database error made by novices.

The problem is one of dependency. The Extended Price calculation depends on the Quantity and Price fields for its value. 3NF says that no field should depend on any field except those making up the primary key.

If I store that calculation and later go back and change one of the dependent fields (either the Quantity or the Price), my calculation will be incorrect. Figure 3 shows some calculated value anomalies.

Figure 3: Anomalies with Calculated Values

First of all, if the user is manually calculating and typing in the value of the Extended Price, the value could be anything, even a calculation from a different row. So let's assume I have an automated process, a formula in a form which calculates the value.

The problem is that you must depend on programming to maintain your data integrity, not the database itself. If the integrity is maintained at the database level, it cannot be subverted.

In the case of the table above, the first anomalous record was caused by changing the Quantity from 1 to 2 after the fact. But because I didn't have a process to re-calculate the value if Quantity changed, the Extended Price is now wrong.

In the second case, an Update Query was used to raise the price of nails from $0.09 to $0.10. Unfortunately, the query did not include a new calculation, so all Extended Price calculations for nails are now wrong.


The problem of calculated values is a simple one to solve. Don't. As a general rule, I just don't store calculations. There are minor exceptions, but in most cases, I'll be safe by just leaving them out. When I need these values, I'll calculate them as output in either a query, form, or report.
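The "calculate it as output" rule can be sketched as follows. The table and field names echo the post's Order Details example, but SQLite stands in for Access (where the same expression would live in a query, form, or report), so this is illustrative only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Quantity and Price are stored; Extended Price is NOT
cur.execute("""CREATE TABLE OrderDetails
               (OrderNum INTEGER, ProductNum TEXT, Quantity INTEGER, Price REAL)""")
cur.executemany("INSERT INTO OrderDetails VALUES (?, ?, ?, ?)",
                [(112, 'A7S', 2, 5.00), (112, 'C4D', 100, 0.10)])

# Calculate it as output, so it can never disagree with its inputs
rows = cur.execute("""
    SELECT OrderNum, ProductNum, Quantity * Price AS ExtendedPrice
    FROM OrderDetails
""").fetchall()
print(rows)

# Changing Quantity needs no second, easily forgotten update:
cur.execute("UPDATE OrderDetails SET Quantity = 3 WHERE ProductNum = 'A7S'")
rows = cur.execute(
    "SELECT Quantity * Price FROM OrderDetails WHERE ProductNum = 'A7S'").fetchall()
print(rows)   # the derived value follows automatically
```

Because the derived value is recomputed on every read, the anomalies in Figure 3 simply cannot occur.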

As with 2NF, the solution to redundant data is to remove it to a separate table, leaving one field to join back to the original. In this case, the ProductNum, Item, and Price fields will go into the Products table. I'll leave ProductNum in the Order Detail table to maintain the relationship. Figure 4 is the result.

Figure 4: Decomposing Order Details to Remove Redundant Data

So now I've removed as much redundant data as possible. There's still a little left. There always will be in order to maintain the relationships between tables. But none of the redundancy will result in data anomalies, so I can say with confidence that my tables are now normalized to Third Normal Form. Figure 5 shows the final design.

Figure 5: Final Design

In my next and final post: The Normal Forms: In A Nutshell, I'll wrap it all up.


Monday, August 28, 2017

The Normal Forms: Second Normal Form

Last time, in The Normal Forms: First Normal Form, I discussed the rules for the basic arrangement of data in a table. If you don't follow those rules, called the First Normal Form (1NF), you don't even have a table. But even if a table is normalized to 1NF, that doesn't mean it's perfect. Figure 1 shows a table normalized to 1NF.

Figure 1: Order Table - 1NF

The problem here is the danger of introducing errors, called data anomalies, into the table. Data anomalies can be introduced by operator error or through programming. Once you have a single data anomaly in your table, all of your data is suspect, so the remaining normal forms work to remove such data anomalies. Figure 2 shows the same table with data anomalies present.

Figure 2: Order Table with Data Anomalies Present

As you can see, Order 112 has two different customer numbers (444 and 445). Which is correct? It is impossible to tell. In addition, both product numbers B7G and B7H are identified as a 'saw'. Are these the same product with different product numbers or different products with the same description? Again, I can't know based on the data in the database.

The root cause of these data anomalies is redundant data, that is, data that is repeated in multiple rows. So we need to minimize this redundant data as much as possible.

Now wait a second! Didn't I just say in the last post that I HAD to repeat the values? Yes I did. But that was to comply with 1NF, which is not the end of the story.


So let's look at the definition of Second Normal Form (2NF). A table is said to be in 2NF if:

  1. It is in 1NF.
  2. Every field is functionally dependent on the entire primary key, that is, it depends on the entire primary key for its value.

Functional Dependency

Before I can continue, I have to talk a bit about functional dependencies, because all of the remaining normal forms rely on this concept. Functional dependency speaks to the relationship that fields in a table have to each other. It is perhaps best explained by example.

Suppose there is an Employee table, and I am an entity in that table. There is a row that represents all the information about Roger Carlson with Social Security Number (SSN) acting as the primary key. Since all the fields in my row are information about me, and I am represented by the SSN, we can say that each field depends on SSN for its value. Another way to say it is that SSN implies the value of each of the other fields in my record.

If a different row is selected, with a different SSN, the values of all the other fields will change to represent that entity.


Second Normal Form says that all of the fields in a table must depend on the ENTIRE primary key. When there is a single primary key (like SSN), it is pretty simple. Each field must be a fact about the record. But when there is a compound primary key, it's possible that some fields may depend on just part of the primary key.

Going back to our Order Table example, Figure 3 shows these partial dependencies.

Figure 3: 1NF Orders Table showing dependencies

In order to uniquely identify the record, the primary key of this table is a combination of OrderNum and ProductNum (or Item, but a number is a better choice).

2NF says that each field must depend on the ENTIRE primary key. This is true for some fields: Quantity depends on both the OrderNum and ProductNum, so does Item. However, some fields do not.

Order 112 will be for customer 444 regardless of the product selected. The order date does not change when the product number changes either. These fields depend ONLY on the OrderNum field.

Since some fields do not depend on the entire primary key, the table is not in Second Normal Form. So what do I do about it?


The solution is to move those fields which do not depend on the entire primary key to a separate table where they do. In the process, I remove the redundant or repeated data so there is just a single record for each. Figure 4 shows the process of decomposing the table into two tables.

Figure 4: 1NF Orders table Decomposed to 2NF
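The decomposition in Figure 4 can be sketched in code. The field names follow the running Order example; SQLite stands in for Access, and the sample rows are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# The flat 1NF table: OrderNum + ProductNum is the compound key,
# so CustomerNum and OrderDate repeat on every line of an order
cur.execute("""CREATE TABLE OrderFlat
               (OrderNum INTEGER, CustomerNum INTEGER, OrderDate TEXT,
                ProductNum TEXT, Item TEXT, Quantity INTEGER)""")
cur.executemany("INSERT INTO OrderFlat VALUES (?, ?, ?, ?, ?, ?)",
    [(112, 444, '2017-08-01', 'A7S', 'wrench', 1),
     (112, 444, '2017-08-01', 'C4D', 'nail', 100),
     (113, 445, '2017-08-02', 'B7G', 'saw', 2)])

# Fields that depend only on OrderNum move to an Orders table;
# SELECT DISTINCT collapses the repeated rows to one per order
cur.execute("""CREATE TABLE Orders AS
               SELECT DISTINCT OrderNum, CustomerNum, OrderDate FROM OrderFlat""")

# The rest stays in OrderDetails, still keyed by OrderNum + ProductNum
cur.execute("""CREATE TABLE OrderDetails AS
               SELECT OrderNum, ProductNum, Item, Quantity FROM OrderFlat""")

orders = cur.execute("SELECT * FROM Orders ORDER BY OrderNum").fetchall()
print(orders)   # one row per order: the repeated customer data is gone
```

With one row per order, a customer number can no longer disagree with itself across the lines of a single order.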

This corrects the data anomaly with the two different customers for the same order. However, I still have the problem of the product number and the item description. It's still possible for the same product to have different descriptions or different items sharing the same ProductNum, as Figure 5 illustrates.

Figure 5: Remaining data anomalies.

Product A7S is either a wrench or a nail, and a saw is either product number B7G or B7H.

To correct these problems, I need to add yet another normal form: Third Normal Form. I'll talk about that next.


Wednesday, August 16, 2017

The Normal Forms: First Normal Form (1NF)

In Normal Forms: Introduction, I introduced the topic of the Normal Forms, the theory behind the arrangement of fields into tables. Since it is usually best to start at the beginning, I'll begin with the First Normal Form.

The First Normal Form, or 1NF, is the very lowest, basic arrangement of fields in a table. If your table is not in 1NF, then it isn't really a table. Sadly, many novice databases are not even in 1NF.

A table is said to be in First Normal Form if:
1) there is no row or column order
2) each row (record) is unique
3) each row by column value (field) contains exactly one value
4) there are no repeating columns

What does this mean?

First of all, the operation of the table will be unaffected by the order the rows are in or the order the fields are within the row. It means that each record must be complete unto itself without referencing another row positionally, for example, the row above. Likewise the position of the fields is irrelevant.

Since each record is unique, it means there are no duplicate records. This uniqueness is defined by a field or combination of fields whose value will never be duplicated. This is called the primary key. In order to assure uniqueness, no part of a primary key may be NULL.

Because a field must have a single value, it cannot contain a list or compound value. One overlooked consequence of this rule is that each field MUST have at least one value. If the value of the field is not known, it is said to be NULL. (There is some debate over whether NULL is actually a value. I maintain it is, but the discussion is largely semantic.)

Lastly, there are no repeating columns. Repeating columns are columns that store essentially the same information. They may be columns like Product1, Product2, Product3; or multiple Yes/No columns that represent the same information like each product having its own column (Saw, Hammer, Nails).


Let's take a look at how these rules are implemented and what they mean for table design.

Suppose I want a simple Order table with OrderNum, CustomerNum, OrderDate, Quantity, Item, and ProductNum. Although the definition of 1NF is fairly simple, it precludes a wide range of data arrangements. Let's take a look at some of these arrangements.

Figure 1 shows one way such data can be arranged.

Figure 1: Records with Missing Values

To make each record unique, the primary key would have to be OrderNum and Item. However, since no part of the primary key may be Null, this arrangement won't work. All the values of the primary key must be filled in.

But even more than this, the record is not "complete" unto itself. That is, it refers to other records for information. It's not that the values of OrderNum, CustomerNum, or OrderDate are unknown and therefore NULL. I do know the value, but I'm attempting to represent that data positionally. This, of course, violates the first rule (order is irrelevant) and rule 3 (each field must have a value).

This arrangement is common in spreadsheets and reports, but it is not sufficient for storing data.

Figure 2 shows another way the data can be arranged.

Figure 2: Information Stored In Lists

This violates rule 3. Each field must hold one and only one piece of information, not a list. It would be a nightmare to do anything with the data in the Item field other than simply display it, because the database management system is designed to treat fields as indivisible.

While Figure 2 is an extreme example that mixes multiple fields in addition to multiple field values, believe it or not, I have also seen databases designed like Figure 3:

Figure 3: Data stored in multiple lists

While this is better than Figure 2 (at least it does not mix fields), it is still not atomic, and you'd have difficulty associating a quantity with a particular product.

Compound Values:
1NF also precludes compound values, things like full names in a single field or multi-part identification numbers.

Full Names

Why not store a full name? Roger J. Carlson is certainly pertinent information about me. However, it is not indivisible. It is made up of a first name, middle initial, and last name. Because I may want to access just pieces of it (using the first name in the salutation of a letter or sorting by last name), the information should be stored in separate fields.

Multi-part Numbers

Often, a database requirement is to have an identification number that is composed of different, meaningful parts. A serial number may have a four-digit product code, followed by the manufacture date (8 digits), and ending with the facility ID. It might look like this: COMP02222008BMH. While this may be a useful arrangement for humans, it is useless in a database. Each value should be stored in a separate field. When the serial number is needed, it can be concatenated easily enough in a query, form, or report.
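The serial number example can be sketched as follows. The table and field names (SerialNumbers, ProductCode, ManufactureDate, FacilityID) are hypothetical, and SQLite stands in for Access, where the same concatenation would use & in a query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Each meaningful part of the serial number gets its own field
cur.execute("""CREATE TABLE SerialNumbers
               (ProductCode TEXT, ManufactureDate TEXT, FacilityID TEXT)""")
cur.execute("INSERT INTO SerialNumbers VALUES ('COMP', '02222008', 'BMH')")

# The display form is concatenated at query time...
serial = cur.execute("""
    SELECT ProductCode || ManufactureDate || FacilityID FROM SerialNumbers
""").fetchone()[0]
print(serial)   # COMP02222008BMH

# ...and the parts remain individually searchable
facilities = cur.execute(
    "SELECT COUNT(*) FROM SerialNumbers WHERE FacilityID = 'BMH'").fetchone()[0]
```

Stored as one compound string, a question like "how many units came from facility BMH?" would require fragile string surgery; stored in parts, it is a plain WHERE clause.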

Figure 4 shows data that is stored in repeated columns.

Figure 4: Data Stored in Repeated Columns

This arrangement is common for people who use spreadsheets a lot. In fact, this is so common it is called "committing spreadsheet". The problem, in addition to having multiple columns, is that in order to associate a quantity with a product, you would have to do it positionally, breaking rule 1.

Lastly, another version of the Repeated Columns error is multiple Yes/No columns. Figure 5 illustrates that.

Figure 5: Data Stored in Yes/No Columns

At first blush, this does not seem to have the same problem, but all I've done is replace generic field names (Product1, Product2, etc.) with specific ones (wrench, saw, etc.). It would be extremely easy to check a second field in any row, and then you would have no idea which was correct.


As we've seen, First Normal Form precludes a lot of possible data arrangements. So what's left? There's really only one possibility, and Figure 6 shows it.

Figure 6: 1NF Correct with Repeated Rows

Each row has a unique identifier and there are no duplicates. Each field contains a single value. The position of the row and field is irrelevant, and lastly there are no repeating columns.
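A minimal sketch of what that arrangement might look like as an actual table (again using SQLite from Python; the table and column names are illustrative, not from the post):

```python
import sqlite3

# One row per order-product pair: a unique key, atomic fields,
# and no repeated columns.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE OrderDetail (
        OrderID  INTEGER,
        Product  TEXT,
        Quantity INTEGER,
        PRIMARY KEY (OrderID, Product)   -- no duplicate rows allowed
    )
""")
con.executemany("INSERT INTO OrderDetail VALUES (?, ?, ?)",
                [(1, "Hammer", 2), (1, "Saw", 1), (2, "Hammer", 5)])

# Row position is irrelevant: we select by value, never by position.
qty = con.execute(
    "SELECT Quantity FROM OrderDetail "
    "WHERE OrderID = 1 AND Product = 'Saw'"
).fetchone()[0]

print(qty)  # 1
```

Adding a tenth product to an order is just another row, with no new columns and no positional bookkeeping.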

It's perfect. Right? Well, no. While this table does conform to 1NF, it still has some problems; problems that 1NF is not equipped to handle. For those, I need to look at the Second Normal Form (2NF), which is what I'll do next time.


Tuesday, August 1, 2017

The Normal Forms: Introduction

Normalization is a methodology for minimizing redundancy in a database without losing information. It is the theory behind the arrangement of attributes into relations. The rules which govern these arrangements are called Normal Forms.

In What Is Normalization: Parts I, II, III, IV and V, I discussed the decomposition method of normalization, where you put all your fields into a single table and break them down into smaller, normalized tables.

In Entity-Relationship Diagramming: Parts I, II, III, and IV, I discussed an alternate method which works from the bottom up. It takes the individual pieces of information (Attributes) and groups them into logical groupings (Entities).

However, in neither case did I formally define or explain the Normal Forms. And that's for good reason. I find that only after people get a working understanding of normalization do they really understand the Normal Forms and what they imply. Therefore I usually leave them until last. If you haven't read the above-mentioned series, it would be worth your while to do so.

Normalization was first developed by E. F. Codd, the father of Relational Database theory. He created a series of "Normal Forms", which mathematically defined the rules for normalization. Each normal form is "stronger" than the previous; that is, each builds upon the ones before it. Second Normal Form (2NF) encompasses all the rules of First Normal Form (1NF) and adds its own. Third Normal Form encompasses all of 1NF and 2NF, and so on.

Figure 1: Each Normal Form Encompasses the Previous

In order, the normal forms are: First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), Boyce-Codd Normal Form (BCNF), Fourth Normal Form (4NF), Fifth Normal Form (5NF), and Domain Key Normal Form (DKNF). BCNF comes after 3NF in the list because it was developed later, but because of its "strength" it belongs between 3NF and 4NF.

Since each normal form encompasses all previous forms, in theory, the higher the normal form, the "better" the database.

In practice, however, normalizing to the first three normal forms will avoid the vast majority of database design problems. So it is generally agreed that, to be properly normalized, most databases must be in 3NF.

Beyond 3NF, the normal forms become increasingly specialized. Boyce-Codd Normal Form and Fourth Normal Form were created to handle special situations. Fifth Normal Form and Domain-Key Normal Form are largely of theoretical interest and little used in practical design.

So what I'm going to do for this series is limit myself to the first three normal forms, giving their definitions, implications for data, and how to implement them.

In my next post, I'll start with the First Normal Form.