Thursday, November 9, 2017

Bang Vs. Dot in Forms

In a previous post (Bang Vs. Dot In DAO), I wrote about the difference between the Dot (.) and the Bang (!) in DAO. It's pretty straightforward: the dot separates one level of the DAO hierarchy from the next and separates an object from its methods and properties; the bang separates an object from the collection in which it is contained.

This is true as far as it goes, but two types of objects in Access, Forms and Reports, muddy the waters considerably. Because forms and reports are classes, controls on them are both members of the Controls collection and properties of the form or report itself.

You can verify this by creating a new form or report object and looking at the Object Browser in the Visual Basic Editor.

(While it works the same on reports, I'm going to concentrate on forms for the moment.)

Create a new form, Form2, with no controls or Record Source. Open the Visual Basic Editor and press <F2> to open the Object Browser. You'll see a list of Classes and, to the right, the Members of the selected class. These members are the properties and methods of the selected class. Access creates a number of default methods and properties, which I'll ignore for now.

Next, create a new table: Table1(Table1ID, Field1, Field2). (See Figure 1)


Figure 1: Table1

(Note: throughout this post, my form's name will be MyForm and the control will be called ControlName -- it could be any control: a text box, combo box, label, or whatever.)

Make this table the RecordSource for Form2. Table1ID, Field1, and Field2 appear in the member list. This demonstrates that the fields in the record source behind the form are properties of the form. See Figure 2.


Figure 2: Table1ID in the members list of the Form2 class

Next, I'll reference the fields behind the form. To reference an object on a form, you start with the Forms collection, followed by a bang (!), followed by the form name, like this: Forms!Form2. This gives me a reference to the form itself.

Now, according to my definition above, following the form reference with a dot and the field name should work (because the fields are properties of the form), but the bang should not, because I haven't created any controls yet. However, on testing, I find that both:

Forms!Form2.Table1ID
Forms!Form2!Table1ID

return the value of Table1ID.

But even though they produce the same result, they aren't the same. It's really a case of two notations that mean different things but nevertheless almost always give the same result. The bang (!) notation specifically denotes that what follows is a member of a collection; in this case, a member of the form object's default collection, the Controls collection. The dot (.) notation denotes that what follows is a property or method of the preceding object.

The Me Object

And then, just to muddy the waters even further, there's the "Me" object. The Me object is used in Visual Basic for Applications (VBA) to reference an instance of a class module. It is an implicitly declared variable and is available to every procedure within the class module and only within the class module.

Since Access form and report modules are class modules, you can also use the Me object to refer to the form or report itself. This gives us a shorthand way to reference objects on a form. I'll address form referencing in a later post, but for now, I can reference a control on a form explicitly:

Me!ControlName

But as I said, the Me object muddies the water because Me.ControlName also works.

I know why Me!ControlName works. It is really just a short-hand way of referring to the default collection and property of the Form object.

The Controls collection is the default collection of the Form object, and Item is the default property of the Controls collection. An explicit reference to a control looks like either of these:

Me.Controls.Item("ControlName")
Me.Controls.Item(0) (assuming 0 is the correct index)

Since Item is the default property, you can also do these:

Me.Controls("ControlName")
Me.Controls(0)
and since Controls is the default collection, you can do these:

Me("ControlName")
Me(0)
So what about Me.ControlName?

This is the really cool part about forms -- when a form loads, Access helps you out by adding every control on the form as a property of the form. This is why

Me.txtTextBox

works. You're asking for the "txtTextBox" property of Forms!MyForm -- which is a pointer to the control, in this case, the text box object.

Which should you use?

So, which is actually preferred? The answer is ... it depends.

Reasons to use Me Dot (Me.ControlName)

  1. Automatic Intellisense support.
  2. Compile-time error if the control is missing or misspelled.
  3. Slightly faster than Me Bang.

Reasons to use Me Bang (Me!ControlName)

  1. Me Bang ALWAYS works to reference the value of a control.
  2. If a control is named the same as a reserved word (e.g., "Name"), Me Bang will correctly reference the control.
  3. If the Record Source of a form is modified at run-time, Me Bang will continue to work.
  4. Intellisense can be initiated with <ctrl>+<space>.
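
To see the two notations side by side, here is a short sketch of the kind of code that might sit in a form's class module. ControlName is just the placeholder name from the note above; any real text box on the form would do.

Private Sub Form_Current()
    ' Dot: ControlName was added as a property of the form's class,
    ' so IntelliSense offers it and a misspelling is caught when the code compiles.
    Debug.Print Me.ControlName.Value

    ' Bang: resolved at run time through the form's default collection,
    ' roughly equivalent to Me.Controls("ControlName").
    Debug.Print Me!ControlName.Value
End Sub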

Thursday, October 26, 2017

Bang Vs. Dot In DAO

You sometimes hear that "the bang (!) refers to user-defined things and the dot (.) refers to Access-defined things." Although that is the standard rule of thumb, it is not exactly correct. More precisely, the bang (!) serves to separate an object from the collection which contains it, and the dot (.) serves to separate one level of the DAO hierarchy from another.

Let me back up.

DAO naming is hierarchical in nature, sort of like a DOS path. And like DOS, you can refer to an object using a fully qualified name or a semi-qualified name. In DOS, a fully qualified name would look like this:

c:\msoffice\access97\test.mdb
If you are already in that folder, you can refer to the file by its semi-qualified name:

test.mdb
In the same way, you can refer to an Access object by its fully qualified name:

DBEngine.Workspaces(0).Databases![c:\msoffice\access97\test.mdb].TableDefs!Table1

or, if you assume the default DBEngine (which we almost always do), the default Workspace, and the default Database, you can refer to the table by its semi-qualified name:

TableDefs!Table1
If you look at the fully qualified name like this:

DBEngine
    .Workspaces(0)
    .Databases![c:\msoffice\access97\test.mdb]
    .TableDefs!Table1
you can see the DAO hierarchy levels more easily and how the dot separates them. (Much like the "\" in DOS.)

The dot also serves to separate an object from its properties and methods, which can also be thought of as another level in the hierarchy. So I can refer to "TableDefs!Table1.RecordCount", RecordCount being a property of Table1.

The bang (!) separates objects from the collections which hold them, thus it separates "Table1" from the collection "TableDefs" and the object "c:\msoffice\access97\test.mdb" from its collection "Databases".

Since most objects are defined by you, and since the levels of the DAO hierarchy are defined by Access, we get the rule of thumb mentioned earlier.

DAO Naming Rules:

  1. The dot serves to separate one level of the DAO hierarchy from another in a fully qualified object name.
  2. The dot also serves to separate an object from its methods and properties. (This, by the way, is the principal use for most people.)
  3. The bang serves to separate an object from the collection in which it is contained.

In DAO 3.0 (Access 2.0), you could use either the bang or the dot to separate an object from its collection. But in DAO 3.5 (Access 97), most objects no longer supported the dot for this purpose, and later versions of DAO (Access 2000 and beyond) don't support it at all. You have to use the bang and the dot properly or you will get a syntax error.
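
To make the rules concrete, here is a short VBA sketch that prints the record count of Table1 using both a fully qualified and a semi-qualified reference. It assumes you are running inside Access and that Table1 exists in the current database.

Sub ShowBangAndDot()
    ' Fully qualified: dots step down the DAO hierarchy,
    ' bangs pick an object out of its collection.
    Debug.Print DBEngine.Workspaces(0).Databases(0).TableDefs!Table1.RecordCount

    ' Semi-qualified: assume the default Workspace and Database.
    Debug.Print CurrentDb.TableDefs!Table1.RecordCount
End Sub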

But this isn't really the end of the story. When using class modules in Access (like Forms and Reports), the Bang and Dot behavior is slightly different. To find out more, read my post: Bang Vs. Dot in Forms.

Wednesday, October 18, 2017

Really Bad Design Decisions: A Case Study

Sometimes a single design decision can have a cascade effect, which causes multiple, secondary design errors. One such error, commonly made by novice developers, is to slavishly follow a pre-existing paper form to determine their table design.

Now certainly, when creating a database to replace a paper-based system, it is vitally important to assemble all of the forms and reports used in the system. It is by a careful review of these documents that the developer determines a majority of the fields he or she will need in order to store and report the information in the system. After all, if an input form has a place for Order Date, then the database needs a field to store it. Likewise, if a report displays the Order Quantity, then this information has to be stored in the database somewhere.

But paper forms are not created with proper database design in mind. They are designed to make it easy for humans to fill out. This design can be at odds with established design principles. By blindly following the paper form to determine the database design, the developer can create a system subject to numerous data and relational integrity errors.

Case Study: OB Log

Several years ago, I ran into a database that was the poster child for this error. I was asked to create a database to automate a logbook for the Obstetrics department of a local hospital. Someone in the department, who had some experience with Access, had taken a first pass at creating a database.

Figure 1 shows the main data entry for the application.

Figure 1: Main entry form for the OB Log database

For the purposes of this article, we're going to ignore the numerous application design errors and concentrate on the database design errors because they're much more important and difficult to fix. Nevertheless, I'd be remiss if I didn't at least mention them. They are:

  1. Hideous use of color. With color, less is more.
  2. The controls are scattered about, making the user hunt for them.
  3. The controls are not exactly aligned, giving the form an amateur look.
  4. The labels for the controls are sunken. This confuses the user as to which controls data can be entered in. In general, labels should be flat, text boxes should be sunken, and buttons should be raised. Any variation just confuses users.

But these application design issues pale in comparison to the database design problems.

Base Assumption: Paper Form Design Determines Database Design

All of the problems below stem from a single assumption: the design of the paper form is also the best design for the database. As we will see, this is not the case. Let's look at the problems resulting from this decision.

Problem 1: Single table design

Because all of the information here was on a single paper form, the developer incorrectly assumed it should all go in one table. The first thing I did when I looked at the database was to open the Relationship Window. It was blank. I knew at that point the design was in trouble. There were supplementary tables, a bewildering number, in fact, but they were just look-up tables that had no relationship to the main data table.

For instance, there was a look-up table called "Dilatation," which held the numbers 1 through 10, the acceptable values for the Dilatation field. There was a table called "Hours," which held the numbers 1 through 12, and a table called "ZeroToThree," which held the numbers 0 through 3. There were also look-up tables for the doctors and nurses.

While many of these tables were in fact useful, there were no tables to hold information representing relationships in the data. In other words, the information that was the meat of the application was in a single table. It is rare when all the information of a complex process can be stored in a single table.

Problem 2: Multiple Yes/No fields

The reason complex processes can rarely be stored in a single table is because most of the time, the data contains One-To-Many relationships. For instance, each Delivery can have one or more Induction Indications, i.e. reasons why labor should be induced. On the paper form, these indications were represented like this:

Figure 2: Paper form layout for Induction Indicators

Each delivery event can have multiple reasons for inducing labor. But since you can't put multiple values in a single field, the developer chose to create 8 Yes/No fields, each representing one indication. On the form, they looked like this:

Figure 3: Portion of the application form implementing Induction Indications.

At first blush, this is a reasonable solution. You can easily query the table for one or more of these indications by testing for a Yes value in any of these fields. For instance:

SELECT * FROM [OB DeliveryOld] WHERE [Elective Induction] = True;
However, other, more complex queries are difficult or impossible with this design. For instance, what if I wanted a count of the following Indications: Elective Induction, Macrosomia, and Preterm PROM with a result like this:

Figure 4: Complex query counting the number of Induction Indicators.

With the single-table design, I would have to create a complex Union query like this:

SELECT "Elective Induction" AS Induction Indications, Count([Elective Induction]) AS [Count Of InductionIndication]

FROM [OB DeliveryOld]

WHERE [Elective Induction]=True

GROUP BY "Elective Induction"


SELECT "Macrosomia" AS Induction Indications, Count([Macrosomia]) AS [Count Of Induction Indication]

FROM [OB DeliveryOld]

WHERE [Macrosomia]=True

GROUP BY "Macrosomia"


SELECT "Preterm PROM" AS Induction Indications, Count([Preterm PROM]) AS [Count Of Induction Indication]

FROM [OB DeliveryOld]

WHERE [Preterm PROM]=True

GROUP BY "Preterm PROM";

If I wanted a count of all the Indications, I would have 8 separate Select statements Unioned together, one for each Indication.

However, in a properly designed database, the query to do this would be as simple as:

SELECT InductionIndications, Count(II_ID) AS [Count Of Induction Indications]

FROM RefInductionIndications INNER JOIN tblInductionIndications ON

RefInductionIndications.II_ID = tblInductionIndications.II_ID

WHERE InductionIndications In ("Elective Induction","Macrosomia","Preterm PROM")

GROUP BY InductionIndications;

So what is the proper design? The key to understanding this is to realize that you can group the individual Indication fields as values for a single field in your table. These values can be stored in a separate lookup table called refInductionIndications. If a delivery could only have one of these indications, there would be a simple One-To-Many relationship between the Delivery table (OBDelivery) and the reference table.

However, since each delivery can have one or more induction indications, there is actually a Many-To-Many relationship. Each Delivery can have one or more InductionIndications, and each InductionIndication can be part of one or more Deliveries.

To create this relationship, you create an intersection table (sometimes called a linking table). The only two fields in this table are the primary key fields of each of the other tables. These fields become foreign keys in relationships created with the other two tables. To make sure you don’t get duplicate records in the intersection table, you make the foreign key fields a compound primary key.
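
As a sketch, the intersection table could be created with DDL run from VBA. The table and key names follow the figures (OBDelivery, refInductionIndications, BirthTrackingID, II_ID); the field types are assumptions, and both parent tables are assumed to already exist.

Sub CreateLinkingTable()
    ' One foreign key to each parent table, combined into a compound
    ' primary key so the same pairing cannot be entered twice.
    CurrentDb.Execute _
        "CREATE TABLE tblInductionIndications (" & _
        "BirthTrackingID LONG NOT NULL, " & _
        "II_ID LONG NOT NULL, " & _
        "CONSTRAINT PK_InductionIndications PRIMARY KEY (BirthTrackingID, II_ID), " & _
        "CONSTRAINT FK_Delivery FOREIGN KEY (BirthTrackingID) " & _
        "REFERENCES OBDelivery (BirthTrackingID), " & _
        "CONSTRAINT FK_Indication FOREIGN KEY (II_ID) " & _
        "REFERENCES refInductionIndications (II_ID))", dbFailOnError
End Sub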

In the Relationship Window, the relationship looks like figure 5:

Figure 5: Correct design for storing multiple Induction Indications per Delivery.

Implementing this in the application can be accomplished with a simple subform whose record source is a query based on a join of tblInductionIndications and refInductionIndications. In this way, the user can select as many Induction Indications as required. The subform would look something like Figure 6:

Figure 6: Subform for Induction Indications for properly designed application

Another problem of having multiple Yes/No fields to represent grouped data falls under the heading of maintenance. What happens if a new Induction Indication must be added to the database?

With the single-table design, the table design itself would have to be modified, of course, but so would the form (a new check box would have to be added to an already cluttered form), and EVERY query and report using Induction Indications would have to be changed. This would require heavy developer support.

On the other hand, with the proper database design (Figure 5), adding a new indication is as easy as adding a new value to the lookup table. The users can easily do it themselves through a maintenance form in the application, and it would require no modification of the application at all.

This problem was also repeated in the Vacuum/Forceps Indications and C-Section Indications sections. For each of these, the developer created multiple Yes/No fields when, in fact, a separate table was required.

Problem 3: Complex programming to Overcome Design Flaws

When a developer does not pay enough attention to the initial design, it often requires complex programming to overcome the flaws.

Problem 3a: Multiple Births

The design of the paper form also misled the original developer regarding birth information. He never asked the obvious question, what happens if there are multiple births for a single delivery, i.e. twins or triplets?

The original paper form had only one place for Birth information. When there were multiple births, hospital staff would simply use a second form, only fill out the birth portion, and staple it to the first form. This in itself should have told the developer that there was a One-To-Many relationship between the mother's information and birth information.

Because the developer was stuck with the single-table design, he was forced to create program code to: 1) duplicate all of the Delivery information into a new record, 2) delete the Birth information from the new record, and 3) allow the user to enter the new Birth information for each of the multiple births.

This design resulted in a huge amount of redundant data and opened the door for data integrity errors. If any of the Delivery information changed for a multiple-birth delivery, the users would have to edit multiple records, with the likelihood of data entry errors.

Of course, the correct design is to have a separate Birth table that is related to the Delivery table on the Primary Key field of the Delivery table (BirthTrackingID).

Figure 7: Correct design for modeling multiple Births per Delivery

The form could be modified to have a subform for holding the Births. In this way, the Delivery information would be held only once, yet the results for each individual birth could be accurately recorded while maintaining a link to the Delivery table.

Problem 3b: Date format

On the paper form, the Birth Date/Time was displayed in a particular format: i.e. "10:15 am Tues. 05/04/99". The developer believed he needed to input the date in that format. Therefore, he created the Date/Time field as a text field and created a complex process involving numerous Toolbars, Macros, and program code to assist the user in entering the data.

Clicking the Date/Time field produced Figure 8.

Figure 8: Series of Toolbar pop-ups that appeared when the Date/Time field was entered, which were supposed to assist the user in entering a date into a text field. A simple input mask on a Date/Time field would have been better.
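
For comparison, had the Birth Date/Time been stored in a true Date/Time field, the familiar display format could have been produced on demand with the Format function. A minimal sketch (the format string is my approximation of the layout shown above):

Sub ShowBirthDateFormat()
    Dim BirthDateTime As Date
    BirthDateTime = #5/4/1999 10:15:00 AM#   ' stored once, as a real Date/Time value
    ' Format the stored value for display only.
    Debug.Print Format(BirthDateTime, "h:nn am/pm ddd. mm/dd/yy")
    ' Prints: 10:15 am Tue. 05/04/99
End Sub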

Problem 4: Copying Paper Form Design for Application Form

This last problem is not so much a database design problem as an application design problem. While it is important to follow the flow of the paper form to make data entry as easy and error-free as possible, the developer should not feel constrained to slavishly imitate the layout of the paper form to produce the application form.

As the developer, you should work with the client to develop the data entry form to make it as easy as possible for the users. This may also mean a redesign of the paper form to facilitate entry into the database.

In my case, I redesigned the database form to look like this:

Figure 9: Redesigned Data Entry Form

Each group of related data is in its own tab at the bottom. This led to a much cleaner design, which was easier for the data entry people to use. We also redesigned the paper form to facilitate this application form.

On-line Database Sample:

On my website, there is a sample database called "ReallyBadDatabase.mdb", which illustrates the problems discussed here, and another called "ReallyBadDatabaseReborn.mdb", which shows how I corrected them.


Does this mean the developer should completely ignore the layout of the paper forms? No. Assembling all of the input forms and reports is a vital part of the database development process. They will provide the majority of the fields necessary to the project. But you cannot let them determine your database design. As the developer, that's your job.

In general, when dealing with a paper input form do the following:

  1. Look for logical groupings within the fields on the form. For instance, PROM, Preterm PROM, PIH, and Macrosomia are all Induction Indications.
  2. Look for relationships in your groupings. For instance, each Delivery can have one or more Induction Indications. When one entity (Delivery) in your database can have multiple of another entity (Induction Indications), this tells you there needs to be a separate table for this entity.
  3. When there are a series of check boxes on a form, and they all represent alternatives of the same thing, you should not create multiple Yes/No fields, but instead create a single field where the acceptable values include all the labels of the check boxes. If more than one of these values can be selected, then you need an intersection or linking table to implement the Many-To-Many relationship.
  4. Regardless of the formatting on the form, all dates should be entered as Date/Time fields in the database. You can format the output with the Format function to meet the users' needs.
  5. Check with the users of the system to make sure you have accurately modeled the data and relationships of the system. Also work with them to create an application form that is easy to use. You may also have to work with them to redesign the paper form.

The developer has to delicately balance the needs of the users against proper database design techniques. In this way, the developer creates a system that is not only easy for the user, but also ensures accurate data.

Friday, September 29, 2017

Normalizing City, State, and Zip

Recently, I ran into a question on the internet about normalization that I thought would be good to repeat.


 I'm toying with the idea of starting a new project, so I'm in brainstorming mode for table design. I'll be recording customer information in this application. Typical stuff: First and Last Names, Company, Street, Apt, City State and Zip, Phone numbers(s) and extensions, E-mail.

How do you guys recommend setting up the tables for City State and Zip? I was thinking that I would have:

StateAbbr (Limited to 2 letters)

FKStateID (Lookup to TBL__State)

FKCityID (Lookup to TBL__City)

My customer information then would record only the zip code (PKZipID). And I could then use queries for the state, city, and zip information for forms, reports, etc.

Or is this beyond overkill?


 By strict normalization theory, having City, State, and Zip in the same table violates the 3rd Normal Form because there are functional dependencies between those fields. However, functional dependencies are not all the same. There are strong dependencies and weak dependencies.

A strong dependency is one in which the value of a dependent field MUST be changed if another field is changed. For instance, suppose I have Quantity, Price, and ExtendedPrice, where ExtendedPrice is a calculation of the other two. If I change either Quantity or Price, the ExtendedPrice MUST be changed.

A weak dependency is one in which the value of a dependent field MAY be changed if another field is changed. City, State, and Zip are examples of weak dependencies. If I change a person's city, I may not have to change their state. They may have moved within the same state. Likewise, if I change the state, I may not have to change the city. There is, after all, a Grand Rapids, Michigan and Grand Rapids, Minnesota. The relationship between city and zip is even more complicated.

Now, it is possible to represent these fields in a fully normalized fashion, but I contend that it is more trouble for very little gain. There are two main reasons for normalizing data: minimize redundant data and maximize data integrity. Both of these can be achieved by using lookup tables for City and State without trying to represent the relationship between the two. A zip code could be mis-typed, of course, but it could also be mis-selected from a list, so to my mind there's no real reason to have a lookup table.

If you did normalize these fields, you could have a selection process that would present all possible combinations of values if you selected the City. For instance, if you had a combo box for City, you could have cascading combo boxes to select only the appropriate States and Zip codes. But it would be just as easy to mis-select the right value from this list as it would be to mis-select from independent lookup tables. And, of course, you'd have to create and maintain these relationships.

Therefore, normalizing City, State, and Zip adds a complication to your data model for very little gain, and in my opinion, is a good example of when to denormalize.


Wednesday, September 20, 2017

The Normal Forms: In a Nutshell

In this series, I have tried to explain in non-mathematical terms what the first three Normal Forms mean and how they determine database design.

This is not the most useful method of learning normalization. In fact, many expert developers never learn the formal definition of the normal forms. If you haven't already, I suggest you read the following series:

ER Diagramming

However, I think it is useful to know what the Normal Forms are. Sometimes when you get stuck in a design, you can go back to the definitions to get yourself out of trouble.

So, in summary:

First Normal Form (1NF) says that each record must be unique, that is, it has a primary key. There are some additional restrictions on how such uniqueness is maintained, such as not allowing positional referencing and not allowing repeated columns.

Second Normal Form (2NF) says that each field in the record must depend on the whole primary key, not just a part of it.

Third Normal Form (3NF) says that no field must depend on any other field except the primary key.

William Kent, author of A Simple Guide to Five Normal Forms in Relational Database Theory, once abbreviated the first three normal forms like this:

"The Key, the whole Key, and nothing but the Key, so help me Codd."

Wednesday, September 13, 2017

The Normal Forms: Third Normal Form

Last time, in The Normal Forms: Second Normal Form, I discussed how to remove redundant data by identifying fields which are not functionally dependent on the entire primary key. Figure 1 shows the results.

Figure 1: Order table Decomposed into Orders and Order Details

This corrected some data anomaly errors in my data; however, data anomalies are still possible under 2NF. To prevent these anomalies, I need an additional rule: Third Normal Form (3NF).


A table is said to be in Third Normal Form (3NF) if:

  1. It is in Second Normal Form and
  2. All non-key fields are mutually independent, that is, all fields are functionally dependent ONLY on the primary key field(s).


The two main sources of data anomalies that 3NF corrects are 1) Redundant Data and 2) Calculated Values.

Redundant Data

Although I removed some of the redundant data when I split the Order table into Orders and OrderDetails, there is still some redundancy left, namely ProductNum and Item. Both of these fields are dependent on the entire primary key, so they comply with 2NF. However, the ProductNum and Item fields are mutually dependent, that is, they depend upon each other. The product number determines the item description, and the item description determines the product number.

Just as we saw in 2NF, redundancy can lead to inconsistent data being entered into the database or correct information being changed after the fact. Figure 2 shows some data anomalies possible under 2NF as a result of redundant data.

Figure 2: 2NF Data Anomalies Due to Redundant Data

Product A7S has two different items associated with it: either a wrench or a nail. Which is it?

Also, two product numbers (B7G and B7H) are associated with an Item called "saw". Is this the same saw or not?

Calculated Values

Mutual dependency is also an issue with storing calculated values. Suppose I had Quantity and Price fields and decided to calculate ExtendedPrice by multiplying them and storing the result. This is a common database error made by novices.

The problem is one of dependency. The Extended Price calculation depends on the Quantity and Price fields for its value. 3NF says that no field should depend on any field except those making up the primary key.

If I store that calculation and later go back and change one of the fields it depends on (either the Quantity or the Price), my calculation will be incorrect. Figure 3 shows some calculated value anomalies.

Figure 3: Anomalies with Calculated Values

First of all, if the user is manually calculating and typing in the value of the Extended Price, the value could be anything, even a calculation from a different row. So let's assume I have an automated process, a formula in a form which calculates the value.

The problem is that you must depend on programming to maintain your data integrity, not the database itself. If the integrity is maintained at the database level, it cannot be subverted.

In the case of the table above, the first anomalous record was caused by changing the Quantity from 1 to 2 after the fact. But because I didn't have a process to re-calculate the value if Quantity changed, the Extended Price is now wrong.

In the second case, an Update Query was used to raise the price of nails from $0.09 to $0.10. Unfortunately, the query did not include a new calculation, so all Extended Price calculations for nails are now wrong.


The problem of calculated values is a simple one to solve. Don't. As a general rule, I just don't store calculations. There are minor exceptions, but in most cases, I'll be safe by just leaving them out. When I need these values, I'll calculate them as output in either a query, form, or report.
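
In practice, that means leaving ExtendedPrice out of the table entirely and computing it wherever it is displayed. A minimal sketch, assuming an OrderDetails table with Quantity and Price fields:

Sub ShowExtendedPrice()
    Dim rs As DAO.Recordset
    ' The extended price is computed on the fly, never stored.
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT Quantity, Price, Quantity * Price AS ExtendedPrice " & _
        "FROM OrderDetails;")
    Do Until rs.EOF
        Debug.Print rs!Quantity, rs!Price, rs!ExtendedPrice
        rs.MoveNext
    Loop
    rs.Close
End Sub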

As with 2NF, the solution to redundant data is to remove it to a separate table, leaving one field to join back to the original. In this case, the ProductNum, Item, and Price fields will go into the Products table. I'll leave ProductNum in the Order Detail table to maintain the relationship. Figure 4 is the result.

Figure 4: Decomposing Order Details to Remove Redundant Data

So now I've removed as much redundant data as possible. There's still a little left. There always will be in order to maintain the relationships between tables. But none of the redundancy will result in data anomalies, so I can say with confidence that my tables are now normalized to Third Normal Form. Figure 5 shows the final design.

Figure 5: Final Design

In my next and final post: The Normal Forms: In A Nutshell, I'll wrap it all up.


Monday, August 28, 2017

The Normal Forms: Second Normal Form

Last time, in The Normal Forms: First Normal Form, I discussed the rules for the basic arrangement of data in a table. If you don't follow those rules, called the First Normal Form (1NF), you don't even have a table. But even if a table is normalized to 1NF, that doesn't mean it's perfect. Figure 1 shows a table normalized to 1NF.

Figure 1: Order Table - 1NF

The problem here is the danger of introducing errors, called data anomalies, into the table. Data anomalies can be introduced by operator error or through programming. Once you have a single data anomaly in your table, all of your data is suspect, so the remaining normal forms work to remove such data anomalies. Figure 2 shows the same table with data anomalies present.

Figure 2: Order Table with Data Anomalies Present

As you can see, Order 112 has two different customer numbers (444 and 445). Which is correct? It is impossible to tell. In addition, both product numbers B7G and B7H are identified as a 'saw'. Are these the same product with different product numbers or different products with the same description? Again, I can't know based on the data in the database.

The root cause of these data anomalies is redundant data, that is, data that is repeated in multiple rows. So we need to minimize this redundant data as much as possible.

Now wait a second! Didn't I just say in the last post that I HAD to repeat the values? Yes I did. But that was to comply with 1NF, which is not the end of the story.


So let's look at the definition of Second Normal Form (2NF). A table is said to be in 2NF if:

  1. It is in 1NF.
  2. Every field is functionally dependent on the entire primary key, that is, it depends on the entire primary key for its value.

Functional Dependency

Before I can continue, I have to talk a bit about functional dependencies, because all of the remaining normal forms rely on this concept. Functional dependency speaks to the relationship that fields in a table have to each other. It is perhaps best explained by example.

Suppose there is an Employee table, and I am an entity in that table. There is a row that represents all the information about Roger Carlson with Social Security Number (SSN) acting as the primary key. Since all the fields in my row are information about me, and I am represented by the SSN, we can say that each field depends on SSN for its value. Another way to say it is that SSN implies the value of each of the other fields in my record.

If a different row is selected, with a different SSN, the values of all the other fields will change to represent that entity.


Second Normal Form says that all of the fields in a table must depend on the ENTIRE primary key. When there is a single primary key (like SSN), it is pretty simple. Each field must be a fact about the record. But when there is a compound primary key, it's possible that some fields may depend on just part of the primary key.

Going back to our Order Table example, Figure 3 shows these partial dependencies.

Figure 3: 1NF Orders Table showing dependencies

In order to uniquely identify the record, the primary key of this table is a combination of OrderNum and ProductNum (or Item, but a number is a better choice).

2NF says that each field must depend on the ENTIRE primary key. This is true for some fields: Quantity depends on both the OrderNum and ProductNum, and so does Item. However, some fields do not.

Order 112 will be for customer 444 regardless of the product selected. The order date does not change when the product number changes either. These fields depend ONLY on the OrderNum field.

Since some fields do not depend on the entire primary key, it is not in Second Normal Form. So what do I do about it?


The solution is to remove those fields that do not depend on the entire primary key to a separate table where they do. In the process, I remove the redundant or repeated data so there is just a single record for each. Figure 4 shows the process of decomposing the table into two tables.

Figure 4: 1NF Orders table Decomposed to 2NF
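
Expressed as DDL, the decomposition might look something like the sketch below. The field names come from the figures; the data types and sizes are assumptions.

Sub DecomposeTo2NF()
    ' Orders: facts that depend only on OrderNum.
    CurrentDb.Execute _
        "CREATE TABLE Orders (" & _
        "OrderNum LONG CONSTRAINT PK_Orders PRIMARY KEY, " & _
        "CustomerNum LONG, OrderDate DATETIME)", dbFailOnError

    ' OrderDetails: facts that depend on the whole key (OrderNum + ProductNum).
    CurrentDb.Execute _
        "CREATE TABLE OrderDetails (" & _
        "OrderNum LONG NOT NULL, " & _
        "ProductNum TEXT(10) NOT NULL, " & _
        "Item TEXT(50), Quantity LONG, " & _
        "CONSTRAINT PK_OrderDetails PRIMARY KEY (OrderNum, ProductNum), " & _
        "CONSTRAINT FK_Orders FOREIGN KEY (OrderNum) " & _
        "REFERENCES Orders (OrderNum))", dbFailOnError
End Sub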

This corrects the data anomaly with the two different customers for the same order. However, I still have the problem of the product number and the item description. It's still possible for the same product to have different descriptions or different items sharing the same ProductNum, as Figure 5 illustrates.

Figure 5: Remaining data anomalies.

Product A7S is either a wrench or a nail, and a saw is either product number B7G or B7H.

To correct these problems, I need to add yet another normal form: Third Normal Form. I'll talk about that next.


Wednesday, August 16, 2017

The Normal Forms: First Normal Form (1NF)

In Normal Forms: Introduction, I introduced the topic of the Normal Forms, the theory behind the arrangement of fields into tables. Since it is usually best to start at the beginning, I'll begin with the First Normal Form.

The First Normal Form, or 1NF, is the very lowest, basic arrangement of fields in a table. If your table is not in 1NF, then it isn't really a table. Sadly, many novice databases are not even in 1NF.

A table is said to be in First Normal Form if:
1) there is no row or column order
2) each row (record) is unique
3) each row by column value (field) contains exactly one value
4) there are no repeating columns

What does this mean?

First of all, the operation of the table will be unaffected by the order the rows are in or the order the fields are within the row. It means that each record must be complete unto itself without referencing another row positionally, for example, the row above. Likewise the position of the fields is irrelevant.

Since each record is unique, it means there are no duplicate records. This uniqueness is defined by a field or combination of fields whose value will never be duplicated. This is called the primary key. In order to assure uniqueness, no part of a primary key may be NULL.

Because a field must have a single value, it cannot contain a list or compound value. One overlooked consequence of this rule is that each field MUST have at least one value. If the value of the field is not known, it is said to be NULL. (There is some debate over whether NULL is actually a value. I maintain it is, but the discussion is largely semantic.)

Lastly, there are no repeating columns. Repeating columns are columns that store essentially the same information. They may be columns like Product1, Product2, Product3; or multiple Yes/No columns that represent the same information like each product having its own column (Saw, Hammer, Nails).


Let's take a look at how these rules are implemented and what they mean for table design.

Suppose I want a simple Order table with OrderNum, CustomerNum, OrderDate, Quantity, Item, and ProductNum. Although the definition of 1NF is fairly simple, it precludes a wide range of data arrangements. Let's take a look at some of these arrangements.

Figure 1 shows one way such data can be arranged.

Figure 1: Records with Missing Values

To make each record unique, the primary key would have to be OrderNum and Item. However, since no part of the primary key may be Null, this arrangement won't work. All the values of the primary key must be filled in.

But even more than this, the record is not "complete" unto itself. That is, it refers to other records for information. It's not that the values of OrderNum, CustomerNum, or OrderDate are unknown and therefore NULL. I do know the value, but I'm attempting to represent that data positionally. This, of course, violates the first rule (order is irrelevant) and rule 3 (each field must have a value).

This arrangement is common in spreadsheets and reports, but it is not sufficient for storing data.

Figure 2 shows another way the data can be arranged.

Figure 2: Information Stored In Lists

This violates rule 3. Each field must hold one and only one piece of information and not a list. It would be a nightmare to do anything with the data in the Item field other than simply display it because the database management system is designed to treat fields as indivisible.

While Figure 2 is an extreme example that mixes multiple fields in addition to multiple field values, believe it or not, I have also seen databases designed like Figure 3:

Figure 3: Data stored in multiple lists

While this is better than Figure 2 (at least it does not mix fields), it is still not atomic, and you'd have difficulty associating a quantity with a particular product.

Compound Values:
1NF also precludes compound values, things like full names in a single field or multi-part identification numbers.

Full Names

Why not store a full name? Roger J. Carlson is certainly pertinent information about me. However, it is not indivisible. It is made up of a first name, middle initial, and last name. Because I may want to access just pieces of it (using the first name in the salutation of a letter or sorting by last name), the information should be stored in separate fields.

Multi-part Numbers

Often, a database requirement is to have an identification number that is composed of different, meaningful parts. A serial number may have a four-character product code, followed by the manufacture date (8 digits), and end with the facility ID. It might look like this: COMP02222008BMH. While this may be a useful arrangement for humans, it is useless in a database. Each value should be stored in a separate field. When the serial number is needed, it can be concatenated easily enough in a query, form, or report.
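
For example, with the pieces stored in separate fields, the human-readable serial number can be reassembled whenever it is needed. A sketch (the variable names are just illustrations):

Sub ShowSerialNumber()
    Dim ProductCode As String, FacilityID As String
    Dim ManufactureDate As Date
    ProductCode = "COMP"
    ManufactureDate = #2/22/2008#
    FacilityID = "BMH"
    ' Concatenate the separately stored parts back into the display value.
    Debug.Print ProductCode & Format(ManufactureDate, "mmddyyyy") & FacilityID
    ' Prints: COMP02222008BMH
End Sub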

Figure 4 shows data that is stored in repeated columns.

Figure 4: Data Stored in Repeated Columns

This arrangement is common for people who use spreadsheets a lot. In fact, this is so common it is called "committing spreadsheet". The problem, in addition to having multiple columns, is that in order to associate a quantity with a product, you would have to do it positionally, breaking rule 1.

Lastly, another version of the Repeated Columns error is multiple Yes/No columns. Figure 5 illustrates that.

Figure 5: Data Stored in Yes/No Columns

At first blush, this does not seem to have the same problem, but all I've done is replace generic field names (Product1, Product2, etc.) with specific ones (wrench, saw, etc.). It would be extremely easy to check a second field in any row, and then you would have no idea which was correct.


As we've seen, First Normal Form precludes a lot of possible data arrangements. So what's left? There's really only one possibility left. Figure 6 shows it.

Figure 6: 1NF Correct with Repeated Rows

Each row has a unique identifier and there are no duplicates. Each field contains a single value. The position of the row and field is irrelevant, and lastly there are no repeating columns.

It's perfect. Right? Well, no. While this table does conform to 1NF, it still has some problems; problems that 1NF is not equipped to handle. For those, I need to look at the Second Normal Form (2NF), which is what I'll do next time.


Tuesday, August 1, 2017

The Normal Forms: Introduction

Normalization is a methodology for minimizing redundancy in a database without losing information. It is the theory behind the arrangement of attributes into relations. The rules which govern these arrangements are called Normal Forms.

In What Is Normalization: Parts I, II, III, IV and V, I discussed the decomposition method of normalization, where you put all your fields into a single table and break them down into smaller, normalized tables.

In Entity-Relationship Diagramming: Parts I, II, III, and IV, I discussed an alternate method which works from the bottom up. It takes the individual pieces of information (Attributes) and groups them into logical groupings (Entities).

However, in neither case did I formally define or explain the Normal Forms. And that's for good reason. I find that only after people get a working understanding of normalization do they really understand the Normal Forms and what they imply. Therefore I usually leave them until last. If you haven't read the above-mentioned series, it would be worth your while to do so.

Normalization was first developed by E. F. Codd, the father of Relational Database theory. He created a series of "Normal Forms", which mathematically defined the rules for normalization. Each normal form is "stronger" than the previous, that is, they build upon the previous normal forms. Second Normal Form (2NF) encompasses all the rules of First Normal Form (1NF) plus adding its own rules. Third Normal Form encompasses all of 1NF and 2NF, and so on.

Figure 1: Each Normal Form Encompasses the Previous

In order, the normal forms are: First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), Boyce-Codd Normal Form (BCNF), Fourth Normal Form (4NF), Fifth Normal Form (5NF), and Domain Key Normal Form (DKNF). BCNF comes between 3NF and 4NF because it was developed later, but because of its "strength" it belongs between 3NF and 4NF.

Since each normal form encompasses all previous forms, in theory, the higher the normal form, the "better" the database.

In practice, however, normalizing to the first three normal forms will avoid the vast majority of database design problems. So it is generally agreed that to be properly normalized, most databases must be in 3NF.

Beyond 3NF, the normal forms become increasingly specialized. Boyce-Codd Normal Form and Fourth Normal Form were created to handle special situations. Fifth Normal Form and Domain-Key Normal Form are largely of theoretical interest and little used in practical design.

So what I'm going to do for this series is limit myself to the first three normal forms, giving their definitions, implications for data, and how to implement them.

In my next post, I'll start with the First Normal Form.

Friday, June 23, 2017

Entity-Relationship Diagramming: Part IV

In Entity-Relationship Parts I, II, and III, I described a method of normalization that builds up from individual data elements. Following this method results in a database design that is normalized to the Third Normal Form -- without having to know what the Normal Forms are.

(I haven't discussed the Normal Forms. Perhaps I will in a later post.)

In Part III, I left off with the completed Entity-Relationship Diagram shown in Figure 1.

Figure 1: Employee Database E-R Diagram

When I do E-R diagramming, I like to ignore the attributes, concentrating instead on just the entities and their relationships. But at some point, obviously, the attributes have to come back into it. I like to save that to the end.

There are some good E-R diagramming tools available out there, like Visio and ERWin, but for small projects, I usually just implement the tables directly in Access and create the Relationships there. The Access Relationship Window works pretty well as a diagramming tool, and you've got a completed database at the end.

Implementing an E-R Diagram

So far, I've talked about Entities and Attributes. I do that deliberately to keep myself from thinking about implementation issues while I'm designing the data model. But at the implementation phase, entities become tables and attributes become fields.

The first thing I do is add an Autonumber, Primary Key field (Surrogate Key), to each table, naming it after the table with a suffix of "ID". Thus the tblEmployee table will have a field EmployeeID. Then I create a Unique Index on the Natural Key I identified during E-R diagramming process.
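
As a sketch, tblEmployee might be created like this: an AutoNumber surrogate key as the primary key, plus a unique index on the natural key (assumed here to be the SSN; most other fields are omitted for brevity).

Sub CreateEmployeeTable()
    ' Surrogate primary key plus a unique index on the natural key.
    CurrentDb.Execute _
        "CREATE TABLE tblEmployee (" & _
        "EmployeeID AUTOINCREMENT CONSTRAINT PK_tblEmployee PRIMARY KEY, " & _
        "SSN TEXT(11) CONSTRAINT UX_tblEmployee_SSN UNIQUE, " & _
        "FirstName TEXT(50), LastName TEXT(50))", dbFailOnError
End Sub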

At this point, my tables look like Figure 2.

Figure 2: Tables with Surrogate Keys Added

Now it's time to look at my relationships. Relationships are created on fields holding related information, Primary Key to Foreign Key. In a One-to-Many (1:M) relationship, the primary key of the table on the "One" side is added to the table on the "Many" side table and becomes the foreign key.

Since there is a 1:M relationship between tblEmployee and tblEmployeeCertification, I'll put EmployeeID in tblEmployeeCertification, where it becomes a foreign key referencing tblEmployee.

I'll do the same with every relationship. The resultant tables look like Figure 3.

Figure 3: Tables with Foreign Keys Added

Lastly, I need to create the actual relationships. I do this by clicking and dragging the primary key of one table into its corresponding foreign key in the other. For instance, I'll click on EmployeeID in tblEmployee and drag it to EmployeeID in tblJobHistory.

When I do, I will get an "Edit Relationships" dialog box shown in Figure 4.

Figure 4: Edit Relationships Dialog Box

The Referential Integrity box will not be checked by default. However, you MUST check it in order to create a true relationship. Without it, Access will create a line between the tables, but it will NOT be a relationship.

When you click the Referential Integrity box, Access should correctly identify the relationship as a One-to-Many relationship. If it does not, if it says Indeterminate, you've done something wrong.

Clicking OK results in the relationship being created. Figure 5 shows that.

Figure 5: Relationship Between tblEmployee and tblJobHistory.
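
The Relationships window is the usual way to do this, but the same relationship can also be created in code. A sketch using DDL (it assumes tblEmployee and tblJobHistory already exist, each with an EmployeeID field):

Sub CreateJobHistoryRelationship()
    ' Creates the one-to-many relationship and enforces referential integrity.
    CurrentDb.Execute _
        "ALTER TABLE tblJobHistory " & _
        "ADD CONSTRAINT FK_tblJobHistory_tblEmployee " & _
        "FOREIGN KEY (EmployeeID) REFERENCES tblEmployee (EmployeeID)", dbFailOnError
End Sub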

Now, I'll just do that for the rest of the relationships, and I'm done.

Figure 6: Completed Data Model

The process I've outlined here is really just the bare bones. If you're interested in seeing examples in more detail, you can find some in the Tutorials: Database Design section of my website.


Tuesday, June 13, 2017

Entity-Relationship Diagramming: Part III

In this Entity-Relationship series, I'm attempting to present an alternative to standard, decomposition-style normalization.

In Entity-Relationship Diagramming: Part I, I grouped all the objects in my business narrative into either Entities or Attributes. Entities are groups of Attributes, and Attributes describe Entities.

In Entity-Relationship Diagramming Part II, I defined the relationships between my entities. Relationships store information about how your Entities interact. Figure 1 shows where I left off.

Figure 1: Relationships Between Entities.

Unfortunately, I'm not done yet, for two reasons: 1) as I said in What is Normalization: Part V, many-to-many relationships cannot be directly implemented in a relational database, and 2) I still have an unassigned attribute. So first I'll rationalize the many-to-many relationship and then take another look.

To rationalize a many-to-many relationship between two tables, you create a third table -- an "intersection" or "linking" table. Then you create one-to-many relationships between the linking table and each of the main tables, with the "many-side" of both relationships on the linking table.

As you can see above, Employee and Certifications have a many-to-many relationship, so I need to create a new entity (Employee/Certifications). Sometimes linking tables have logical names. Other times, they don't. In that case, I simply combine the names of the base tables.

Figure 2 shows how the rationalized relationship is diagramed.

Figure 2: Rationalized Employee-Certification Relationship

Now I can see where to put my unassigned Certification Date field. The Employee/Certification entity represents a certification for a particular employee and that can be given at only one time. Therefore the Certification Date field goes in this new entity. Figure 3 shows the completed Attribute Grid.

Figure 3: Final Attribute Grid

Now that I've got all the relationships between my entities identified and assigned all the attributes, I can put it all into one diagram.

Figure 4: Final E-R Diagram

Now I've got all the pieces. All that's left is to implement my diagram in Access (or some other database). In Entity-Relationship Diagramming: Part IV, I'll do just that.


Tuesday, June 6, 2017

Entity-Relationship Diagramming: Part II

In my last post, Entity-Relationships Part I, I approached the idea of normalization from a different point of view. Rather than breaking a large table into smaller, normalized tables, the Entity-Relationship process starts with the individual pieces and builds it up into a properly designed data model. It uses objects (Entities and their Attributes) and looks at the real-world Relationships that exist between them. The end result of this is an Entity-Relationship Diagram.

I started the process by writing a short narrative of the database requirements and grouping all the nouns into related categories. A few could not be assigned. Figure 1 shows the resultant grid.

Figure 1: Entity-Attribute Grid

At this point, I should say that this is not a strictly linear process. That is, you can't always move smoothly from one step to the next. Sometimes you have to move back and forth between them as you discover more things about your system.

That's what I'm going to do next. Because I have an unassigned attribute, I'm going to look at the relationships between my existing entities and see if something doesn't present itself.

To look at the relationships, I'm going to ignore the attributes for a while. Attributes do not have relationships, only entities do. If you discover that an attribute does have a relationship with some other entity or attribute, that's an indication that it is really an entity and your grid must change.

First, I'll determine the relationships between existing entities. In What Is Normalization, Part IV, I describe the different types of relationships: One-to-One (1:1), One-to-Many (1:M), and Many-to-Many (M:M). Then I'll diagram them in pairs.

The diagramming technique I'll use is one of the simplest. Entities are represented by boxes. Relationships between entities are represented by lines. "Crow's feet" show a "many-side" relationship and a vertical bar shows a "one-side" relationship. Figure 2 shows how the relationship types are represented.

Figure 2: Relationship Types

Many-to-Many: Common in real life, but cannot be represented directly in a database.
One-to-Many: The most common relationship in a database.

One-to-One: Seldom used.

So how do I know what the relationships are for my Employee Database? For that I need to go back to the narrative. The second paragraph describes "business rules", that is, how the business actually works. I'll repeat the paragraph here.

An employee can have multiple jobs over time, (ie, Analyst, Sr. Analyst, QA Administrator). Employees can also earn certifications necessary for their job.

From this I can write out the relationships in full sentences, and I find it useful to write them in both directions. For instance, from the narrative, I can say:

Each Employee can have One or More Job History instances.
Each Job History instance can be for One and Only One Employee.

This is a classic One-to-Many relationship, and we will diagram it as follows:

Job-Job History
Each Job can have One or More Job History instances.
Each Job History instance can be for One and Only One Job.

Another One-to-Many relationship, diagrammed as follows:

Employee-Certifications
Each Employee can attain One or More Certifications
Each Certification can be earned by One or More Employees

This is a Many-to-Many relationship, and it will need additional work, but I'll leave it for now.

It might seem that there is no relationship between Job and Certifications, except for the word "necessary" in the narrative. In going back to the client, I discovered that certain certifications are necessary in order to be promoted to some jobs. Thus there IS a relationship, as follows:

Each Job requires One or More Certifications
Each Certification is for One and Only One Job

Another One-To-Many relationship. Each certification is only for a single job. It is important to verify this with the client. If a certification were required for many different jobs, the model would be different, but in this case it is for only one job. The certification requirements for Analyst I are different from those for Analyst II.

These sentences are extremely useful when communicating with your client (or even if you're developing the database for yourself). If the client can say that each of the sentences is strictly true, then you'll know you've designed it correctly. If any of the sentences are wrong, you have to go back and re-evaluate your design.

Unfortunately, as I said in What Is Normalization, Part V , many-to-many relationships cannot be represented directly in a relational database. So I'm going to have to add some intersection (or "linking") tables.

I'll take care of that next in Entity-Relationship Diagramming: Part III


Wednesday, May 31, 2017

Entity-Relationship Diagramming: Part I

In the last series, Normalization Parts I, II, III, IV, and V, I approached normalization from a particular point of view. I put all the information into a single table then removed redundancies into separate tables. This method is called decomposition. Decomposition is fine for understanding the theory of normalization and for creating small databases. However, it is less useful for large databases. At least, I've found it so.

So I'm going to talk about another way to approach normalization that starts with the individual pieces and builds it up into properly normalized tables. This method is called the Entity-Relationship method and the final result is an Entity-Relationship Diagram. An E-R diagram is useful not just for creating the data model, but for documenting it as well.

Since we've been working with the Employee Database in our other examples, let's stick with it. But since I claimed that the E-R method works for more complicated designs, let's make it a little more complex. I like to start with a short narrative of the requirements.


ZYX Laboratories requires an employee tracking database. They want to track information about employees, the employee's job history, and their certifications. Employee information includes first name, middle initial, last name, social security number, address, city, state, zip, home phone, cell phone, email address. Job history would include job title, job description, pay grade, pay range, salary, and date of promotion. For certifications, they want certification type and date achieved.

An employee can have multiple jobs over time, (ie, Analyst, Sr. Analyst, QA Administrator). Employees can also earn certifications necessary for their job.

Next, I take the narrative and underline all of the nouns. (I'm going to bold them here because Blogger doesn't have underlining.)

Narrative 2

ZYX Laboratories requires an employee tracking database. They want to track information about employees, the employee's job history, and their certifications. Employee information includes first name, middle initial, last name, social security number, address, city, state, zip, home phone, cell phone, email address. Job history would include job title, job description, pay grade, pay range, salary, and date of promotion. For certifications, they want certification type and date achieved.

An employee can have multiple jobs over time, (ie, Analyst, Sr. Analyst, QA Administrator). Employees can also earn certifications necessary for their job.

All of these nouns must be represented in the database -- some as Entities and some as Attributes. An Entity is a "thing" about which we store information. An Attribute is the information that is being stored.

So the next task is to group the nouns into logical groupings. At first pass, it appears that there are three entities: Employee Information, Job History, and Certification. It is useful at this point to put them in a grid and assign the rest of the attributes like Figure 1.

Figure 1: Attribute Grid

Next, I need to assign primary keys to each entity. (At this point, I call them "entities" rather than "tables".) As I said in What Is A Primary Key?, a primary key is a field or fields which uniquely identify a record. At this point, I'm dealing only with natural keys. Figure 2 shows the primary keys.

Figure 2: Attribute Grid with Primary Keys

Now we go back to our entities and look for two things:

  1. see if each attribute can have only one value for that entity, and
  2. see if each attribute truly belongs to the entity.

To determine the second, we look at "functional dependencies", which just means that each field's value depends on the value of the primary key.

So, for the Employee table, a person (as represented by the SS#) can have only one first name, last name, address, home phone, and so forth. That satisfies requirement #1. Secondly, if the value of the SS# changes, then so will all of those values. By that, I mean if we move to a different entity with a different SS#, that entity will have a different first name, last name, etc. (For our purposes here, we will assume that no two employees share any of these attributes.)

Now, what about the Job History table? Any time an entity has a compound primary key, you should look at it very closely to make sure all the fields depend on the entire primary key. Any particular job can have only one description, pay grade, and pay range. However, none of those depend on the Promotion Date.

I've got a problem here and I need to take another look. What I really have is information about two different "things".

Job Title, Description, Pay Range and Pay Grade pertain to the Job as a category. Everyone who holds that position will have the same values. On the other hand, Salary and Promotion Date will be different for each person. So I really have two entities: 1) Job (information about the job itself), and 2) Job History (information about a particular employee's employment history).

I need to take Job Title, Description, Pay Range and Pay Grade out of the Job History table and put them in the Job table.

Lastly, in the Certification table, Certification Date is also not fully dependent on the Certification Type. Different individuals achieve the certification at different dates. I don't have an entity to put the date in, so I'll put that to the side and come back to it later.

Figure 3 shows the amended grid.

Figure 3: Amended Attribute Grid

At this point, it is useful to look at the Relationships between these Entities. We may be able to find a place to put our unassigned attribute. But I'll save that for the next post: Entity-Relationship Diagramming: Part II.