Aspect-Oriented Programming Overview

Aspect-Oriented Programming (AOP) is a set of methodologies, tools, and approaches that attempts to improve modularity by allowing the separation of cross-cutting concerns. A cross-cutting concern is a part of an application that affects other areas of the program. These concerns generally cannot be cleanly decoupled from the other parts of the system and tend to result in either some duplication of code or deep dependencies between the various areas of the system. Cross-cutting concerns are quite common; an enterprise or near-enterprise level application will always have at least one, if not most, of these concerns. Typically these items are used the same way throughout the different areas of the software, and include examples such as:

  • Synchronization
  • Error detection and correction
  • Data validation
  • Persistence
  • Transaction processing
  • Internationalization and localization
  • Information security
  • Caching
  • Logging
  • Monitoring

If you use AOP in an object-oriented (OO) language, you generally need to create separate objects/classes to manage these cross-cutting concerns.  This means that your code base will likely have a section of code relegated to providing the classes and methods that are accessed from many other classes.  This is certainly something that we are all used to seeing, so it almost seems natural, and we can generally loosen our object-oriented paradigms enough to convince ourselves that this is an “ok solution.”  And it is.  It works.  It provides modularity.  However, it also puts too much intelligence into the classes that call that shared code.

A common workflow that demonstrates this is when a new user registers for a website.  The information is saved in the local system’s database.  Then an email is sent to the user welcoming them to the site.  A message is then sent to the email-list management application to enroll the user into the mailing list.  There is also logging going on before each major step.  This generally looks something like:

Example of OO code with cross-cutting items
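
A minimal sketch of what such a Save method might look like; the LoggingManager, Database, EmailManager, and MailingListClient helpers are assumed names used purely for illustration, not part of any framework:

    using System;

    // Stub helpers, assumed purely for illustration.
    static class LoggingManager { public static void Log(string msg) { Console.WriteLine(msg); } }
    static class Database { public static void Save(object item) { /* persist to the local database */ } }
    static class EmailManager { public static void SendWelcomeEmail(string to) { /* send welcome email */ } }
    static class MailingListClient { public static void Enroll(string email) { /* enroll in mailing list */ } }

    public class User
    {
        public int Id { get; set; }
        public string Email { get; set; }

        // Saving a user drags in logging, email, and mailing-list work,
        // none of which is really the User class's concern.
        public void Save()
        {
            LoggingManager.Log("Saving user");
            Database.Save(this);                   // the core concern: persistence

            LoggingManager.Log("Sending welcome email");
            EmailManager.SendWelcomeEmail(Email);  // cross-cutting: notification

            LoggingManager.Log("Enrolling user in mailing list");
            MailingListClient.Enroll(Email);       // cross-cutting: external system
        }
    }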

There is nothing really obnoxious about this example, but it shows that there are things going on in this method that are really not a concern of the User class.  This is a convenient place to put this work, but is it correct?  Another solution is to introduce a managing class that calls the User.Save method and then orchestrates the rest of the work, something along the lines of a UserManager.Save().  However, that means you likely have a less intuitive route into creating a new user, and you have implemented a class that will easily become an example of an OO anti-pattern (the Anemic Domain Model).

So how can you solve both the requirement that this business process be supported AND not implement any anti-patterns?  Is it even possible?

Yup, it is.  One way is through the use of an AOP approach.

Let’s start with some of the AOP vocabulary.  It is not necessary for implementation, but more for communication.

Join points – Join points are those places in the code where the cross-cutting concerns appear.  In the example above they include the calls to the EmailManager, the LoggingManager, and the MailingListClient.  These points are the areas where it makes sense to add additional functionality.

Point cuts – Point cuts provide a way to determine whether a certain set of functionality matches a particular join point, basically linking a join point to a separate set of functionality.

Advice – Information about whether the code should run before, during, or after a join point.  This code fires, however, only when its point cut matches.

Yes, I know this is a confusing set of explanations at this point. For now, let us leave it that join points mark the areas where other stuff can be done, point cuts provide a way to do that other stuff, and advice gives information on when to do that other stuff.
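
To make the vocabulary a bit more concrete, here is a hand-rolled C# sketch; this is not a real AOP framework, just made-up names to illustrate the three terms:

    using System;

    public static class PoorMansAop
    {
        // The join point is the wrapped method call; the point cut is the
        // rule that decides whether it matches; the advice is the extra
        // code that runs before and after.
        public static void Invoke(string methodName, Action joinPoint)
        {
            bool pointCutMatches = methodName.EndsWith("Save");   // point cut

            if (pointCutMatches) Console.WriteLine("before advice: log entry");
            joinPoint();                                          // join point
            if (pointCutMatches) Console.WriteLine("after advice: log exit");
        }
    }

Calling PoorMansAop.Invoke("Save", () => user.Save()) would run the logging advice around the save, without the User class knowing anything about it.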

When working in a fully aspect-oriented language, that is all you need. What this does is tell the compiler how to link the various modules together. In an AO compiler this is called weaving: the compiler knows, through the join points and point cuts, what code needs to be woven from a cross-cutting module into a core module (the User class shown above, for example). Weaving this code creates a single output where all of the cross-cutting concerns are managed in a different module yet are considered part of the base executable. In short, it is a compiler “trick” that folds the cross-cutting code into the core code as marked by the join points.
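
Conceptually, reusing the assumed helpers from the earlier sketch, weaving takes a clean core method and folds the cross-cutting code in around it; the woven version below is illustrative, not actual compiler output:

    // What you write in the core module: only the core concern.
    public void Save()
    {
        Database.Save(this);
    }

    // Roughly what the weaver produces (renamed here only so both
    // versions can be shown side by side):
    public void Save_AfterWeaving()
    {
        LoggingManager.Log("Saving user");      // woven from the logging module
        Database.Save(this);
        EmailManager.SendWelcomeEmail(Email);   // woven from the email module
        MailingListClient.Enroll(Email);        // woven from the mailing-list module
    }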

Can this be done in .NET? Next post will talk about that.

Is Object-Oriented the Ultimate Design Approach for Software Development?

If so, then we are all screwed. Object-oriented (OO) programming was popularized with the Smalltalk language (which has been around for over 30 years) and is still going strong in modern languages such as C# and Java. In OO, the critical feature is the necessity of creating an object or class that is supposed to describe a thing; in other words, a noun. This object, or noun, has a set of properties that describe it; think of them as adjectives. Thus a noun, cat, has a set of adjectives that describe it: color, breed, height, weight, meow type, degree of aloofness, and desire to claw the couch. The combination of those adjectives, or properties, gives an accurate understanding of the object, or noun, that is being described.

OO then expects this object, or noun, to contain a set of methods, or verbs, that describe the things that can affect the properties, or adjectives, of the class, or noun. In an ideal OO world, what you have is a single noun with multiple verbs that describe everything that can affect the noun. As you start to look at it from a more semantic and less “software development-based” perspective, the construct starts to fall apart. Verbs can affect the adjectives of the noun, such as the verb action of running affecting the noun Joe’s adjective, or state, of tiredness. However, they are not the only way to affect the noun. Other nouns, performing other verbs, affect the initial noun through interactions. This interaction is denied in a pure object-oriented approach because an object is responsible for all information about itself and all the actions taken on it. This is not a viable approach.
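
As a toy sketch of that noun/adjective/verb mapping (the class and member names are mine, purely for illustration):

    public class Cat
    {
        // "Adjectives": properties that describe the noun.
        public string Color { get; set; }
        public string Breed { get; set; }
        public double WeightKg { get; set; }
        public int DegreeOfAloofness { get; set; }

        // "Verb": a method that affects the noun's own adjectives.
        public void Eat(double gramsOfFood)
        {
            WeightKg += gramsOfFood / 1000.0;   // eating affects the weight adjective
        }
    }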

The main problem I have with OO is its lack of acknowledgement of context. To make anything useful requires context. Many of the adjectives for a noun have no meaning to the noun itself, nor does the noun realistically have any way to control them; yet in an OO solution the noun is responsible for setting and maintaining them. Is this realistic? And isn’t that what we are trying to do? Mimic reality as a way to make the software development process understandable? An object that represents me may include adjectives such as tall and ruggedly handsome. However, I have no way of controlling them. They are pretty much independent of anything I can do to manage them. This is where OO lacks context. While there may be properties that describe an object, they are not something that the object should be able to manage. Along with properties that a class shouldn’t manage, there are properties that a class doesn’t care about. If a property is never used in business logic, then intrinsically it is not something that matters in an OO world. This is why the OO design decision to require the mixing of data with business logic is flawed. Martin Fowler refers to an approach where data and business logic are separate as an anti-pattern, or something to be avoided when using an OO approach to programming (even using the word horrifying – http://www.martinfowler.com/bliki/AnemicDomainModel.html). However, I respectfully disagree. There certainly is some business logic that makes sense to stay with the data: the logic that is completely about setting some properties and maintaining relationships between internal state, where the changing of one property may affect another property.

With that in mind, however, there is a different set of business logic that should be separated from the object itself. When you look at a physical representation of a car, it can really do nothing on its own. It deals with some things internally, but much of the real usefulness of the item is handled by a manager outside of the object reacting to various states of the object. This is the antithesis of OO, which tries to encapsulate and hide the state of an object, when in reality many of the decisions that are made about an object are based on external views of that object’s state. This means that to give external manipulators more real understanding of an object we have to expose more of that state, not less. OO’s secretive nature means that you have to violate fundamental principles to allow the business to function.

Some of the very features that OO brought into software development also show how the approach is an incomplete representation of reality. When you look at the class structure of an enterprise-level application you will find that it is riddled with code that has nothing to do with representing reality. Inheritance, for example, is an awesome feature that helps to increase code reuse. However, it also requires the creation of multiple levels of unreal objects, basically sub-objects that do common work on common properties. I know the argument about how, if you look at it from the top down, these subclasses aren’t really noticed; much like how the digestive system of a cat is intrinsically part of the beastie itself even though it can be examined separately and reused/inherited between different classes of animals. However, when you look at it from the side it shows a completely different representation: a stack of strangely-named classes that group some subset of work. It is much like an iceberg; when viewed from the top it is a beautiful representation of nature’s majesticness (I know it’s not a word, but say it out loud and it fits anyway). When this iceberg is looked at in whole it is a potential ship-killer. It also has the possibility of releasing city-destroying monsters when broken apart, as anyone who has refactored a multi-tiered class knows.

Am I saying to throw OO away? No, I am most certainly not saying that. There are many useful approaches that have come out of OO-based designs. However, it is critical to realize when the tenets of OO get in the way of creating a real-life working application that fully supports the needs of the business. Sticking to a “pure OO approach” will lead to complex and ultimately unwieldy codebases. This is acknowledged by the industry, as there is a steady increase in the number of other approaches that take some of the good points of OO yet try to fix the flaws. The rise of languages such as F#, which take a functional approach to application construction, or the addition of constructs into base frameworks (such as Unity for .NET and AspectJ for Java) to allow for Aspect-Oriented Programming, shows how modern thinking has evolved different and better ways to solve software engineering problems. Lockstep obedience to the tenets of OO is a major problem in the industry. When other aspects of the business praise the ability to “think outside the box” yet it gets damned in software development, you know there is a fundamental problem. As practitioners of software development we should always be working to increase our abilities to do our jobs better and more efficiently. This means we need to evolve with the technologies around us. Don’t be afraid to learn about and try other approaches; you may find them more constructive ways of solving long-standing problems. If you aren’t having any cognitive dissonance you aren’t learning. Think about how the different approaches could be merged – why does it have to be one or the other? How can the best of all approaches be used together? Those are the kinds of things we should be thinking about.

Stay tuned. The next set of articles is on Aspect-Oriented Programming. Perhaps these will help you to understand how all of the approaches can play in the sandbox together.

ASP.NET Bundling: Fewer is better

The ASP.NET MVC bundling feature enables you to create a single file from multiple files, and it can be used with CSS, JavaScript, and custom bundles. By itself, bundling does not reduce the amount of data being downloaded (although the ASP.NET implementation of bundling provides minification as well, that is another article). Instead, it is designed to limit the number of connections needed for downloading files. This is important because modern browsers only allow six simultaneous connections to the same hostname/server, and there are even fewer when using a mobile browser or connecting over dial-up or a VPN. Even on a broadband connection, a list of 20 files to download will keep all six connections busy for more than three rounds of requests. By putting all of the files into a single bundle you only use one connection, leaving the other connections free to download items such as graphics or other objects. If you already download a minimal number of external files there is no need for bundling, but you should consider it if you have a lot of add-ins.

There is a cost to using bundling, however. Although you will save some download time, this savings is realized only the first time the file is downloaded. The browser generally caches the information as it comes down, so it is not downloaded on every visit. However, by bundling multiple scripts into a single file, you have slightly increased the amount of time it takes to find the necessary function or other item within that file, and this increase applies every time the file is accessed, not just the first time it is downloaded. You get a one-time gain in download speed for some continual impact on access performance. It becomes a balancing act as you determine which scripts make sense to bundle together, and how many, before you start seeing a discernible impact on client-side performance.

Updating one file in a bundle ensures that a new token is generated for the bundle query string parameter. This change means that the bundle must be downloaded the next time a client requests a page containing that particular bundle. When not using bundling, each asset is listed in the page individually, so that only the changed file would get downloaded by the browser. This implies that files that change a lot may not necessarily be the most suited for bundling.
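
For example, a rendered bundle reference carries the token in its query string; the v value below is made up, but the shape is representative of what the framework emits:

    <script src="/bundles/myBundle?v=2PKmWyd4H1kECt4QkVVcnQzrW40"></script>

When any file in the bundle changes, the v value changes, so the browser sees a brand-new URL and downloads the bundle again.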

The Bundle class Include method takes an array of strings, where each string is a virtual path to the resource. The Bundle class IncludeDirectory method is provided to add all the files in a directory (and optionally all subdirectories) which match a search pattern. If you determine that your application will benefit from bundling, you can create bundles in the BundleConfig.cs file with the following code:

    bundles.Add(
        new ScriptBundle("~/bundles/myBundle").Include(
        "~/Scripts/myScript1.js",
        "~/Scripts/myScript2.js",
        "~/Scripts/myScript3.js")
        );

With this code you are telling the server to create a new script, myBundle, made up of myScript1.js, myScript2.js, and myScript3.js; and add this new bundle to the bundle collection. The bundle collection is a set of the bundles that are available to your application. Although you can refer to the new script in a direct script link, just as you would one of the scripts being bundled, the bundle functionality gives you another path to put this script into your page:
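
The IncludeDirectory method mentioned earlier works the same way; here is a sketch (the bundle name and folder are placeholders) that picks up every .js file under ~/Scripts, including subdirectories:

    bundles.Add(
        new ScriptBundle("~/bundles/allScripts").IncludeDirectory(
            "~/Scripts",   // virtual path of the directory
            "*.js",        // search pattern
            true)          // include subdirectories
        );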

    @BundleTable.Bundles.ResolveBundleUrl("~/bundles/myBundle")

This code not only creates the script link for you, it also generates the hash token for the script. This means the browser can cache the script longer and the client will have to download it fewer times. With the hash token, browsers get the new script only if the token is different, such as when content in the bundle has changed, or if the cached copy hits its internal expiration date, which is generally one year.

Bundles are referenced in views using the Render method (@Styles.Render for CSS and @Scripts.Render for JavaScript). The following markup from the Views\Shared\_Layout.cshtml file shows how the default ASP.NET Internet project views reference CSS and JavaScript bundles.

    <!DOCTYPE html>
    <html lang="en">
        <head>
            @Styles.Render("~/Content/themes/base/css", "~/Content/css")
            @Scripts.Render("~/bundles/modernizr")
        </head>
        <body>
            @Scripts.Render("~/bundles/jquery")
            @RenderSection("scripts", required: false)
        </body>
    </html>

Using bundling in the framework has other advantages as well. If you have both debug and minified versions of a script in the same directory of your project, the framework will decide which version to include based on your DEBUG mode (though you only need to provide minified versions for those cases where the default ASP.NET minification does not work). If you are using the built-in minification, adding the default version of the script to a bundle lets you send a minified version when in RELEASE mode and a regular, un-minified version of the script when in DEBUG mode.

The framework also understands the difference between script.debug.js and script.js. This means that when you have both files in the same folder and script.js is referenced in your bundle, running in DEBUG mode ensures that the framework will pick up the .debug version of the file rather than the non-debug version; perhaps this version has alert windows in crucial areas, or other items to aid in the debugging of the application. Debugging the application in non-DEBUG mode is more complicated, however, as any errors that are thrown report line numbers in the bundled file and not in the files that you work with while in DEBUG mode.
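
For example, assuming a folder that contains both myScript1.js and myScript1.debug.js (the file names are illustrative), the bundle references only the non-debug name and the framework makes the substitution on its own:

    // ~/Scripts/myScript1.js        <- normal version, used in RELEASE mode
    // ~/Scripts/myScript1.debug.js  <- diagnostic version, used in DEBUG mode
    bundles.Add(
        new ScriptBundle("~/bundles/myBundle").Include(
            "~/Scripts/myScript1.js")
        );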

If you want to put the application in DEBUG mode, set the debug attribute in the compilation element of the web.config to true as shown below:

    <configuration>
        <system.web>
            <compilation debug="true" />
        </system.web>
    </configuration>

Bundling is a feature provided by the ASP.NET framework that automatically combines multiple files into a single file. This helps limit the number of files that need to be downloaded from the server and can enhance performance because it eliminates the need to make multiple connections to the server. It also supports automatically selecting the appropriate script type, whether DEBUG or RELEASE, based on the compilation node within the web.config file. With this robust and built-in functionality it is difficult to conceive of a time when a JavaScript-rich site would not benefit from its use. Be fruitful and de-multiply…

Book Review: Mastering ESL and Bilingual Methods: Differentiated Instruction for Culturally and Linguistically Diverse (CLD) Students (with MyEducationKit)

This is a book review for Mastering ESL and Bilingual Methods: Differentiated Instruction for Culturally and Linguistically Diverse (CLD) Students (with MyEducationKit) by Socorro G. Herrera and Kevin G. Murry.

A strong entry in the study of working with ESL and CLD students. It starts with information on the various practices, then goes into the theories behind them, ending with a chapter on professional practice. Personally, I would have preferred the opposite approach: starting from a higher level and then digging down into the practical aspects of applying the information. This is especially true since I do not approach this as a classroom teacher but rather as a technologist who wants to ensure that all the technology I am a part of creating is grounded in what really works, not necessarily what is “cool” or “fun.”

I found this subject matter especially applicable to technology creation, as the main theme of this book is to ensure that culturally and linguistically diverse (CLD) students are properly motivated, encouraged, and accommodated so that they learn both English and the subject matter. A truly universal application has to keep this same concept in mind. As technologists we make the same mistake, and end up building products that appeal “to people like us” no matter what our anticipated audience may be. Instead, as Herrera and Murry discuss, we should be making sure that our work will accommodate, encourage, motivate, and instruct as many people as possible, regardless of their cultural or linguistic background.

I was completely underwhelmed with the MyEducationKit from Pearson. It was difficult to use and most of the time would not render correctly in a browser. Several of the videos would not play in the browser either, instead having to be downloaded and run locally. With the YouTubes (or even HTML5) of the world this was ridiculous, and as a technology professional I found it almost personally offensive.

All in all – the book was good, the extra money for the education kit was a waste.

Code Contracts – Invariant

Code contracts are a way to enforce conditions within your code. The last article discussed preconditions and postconditions, which are ways of managing validation on parameters being passed into a method (precondition) and the return values from your method (postcondition).

There is one other primary type of code contract: Invariant. An invariant contract continuously checks a class to determine that it is in a correct state whenever work is being performed. To perform this check you need to create a single method that manages it. This method can be named anything you want; the contracts subsystem will know what it is for because it has been decorated with the ContractInvariantMethod attribute. In this method you need to manage any business rules that affect the validity of your object. When you have an invariant contract, the only time the application can violate these rules is when it is doing work in private methods.

In the following sample, the rule to be enforced is that there will never be a time when the Id of the item is less than 0, other than within a private method:

[ContractInvariantMethod]
private void ManageInvariant()
{
    // The invariant declares the condition that must always hold:
    // the Id is never negative.
    System.Diagnostics.Contracts.Contract.Invariant(this.Id >= 0);
}

The main consideration is when you should use an invariant. As you can imagine, having a single method responsible for managing the correctness of state for the entire object implies considerable domain knowledge packed into a single place. This makes it a potentially key piece of functionality when you are designing your application following the Domain-Driven Design (DDD) methodology, as DDD strongly recommends that you never work with an object that is in an invalid state. Invariant contracts are a natural way to enforce that recommendation in .NET.

Does that mean that Invariant contracts should be in every object in your domain? Of course not! Plain-old C# objects (POCO), for example, will rarely need any kind of invariants because they are designed to be a grouping of like data and by definition will not contain any business logic or rules. Domain objects, on the other hand, do contain business logic so are more likely to need invariants.

As you look deeper into the need for invariants you should consider any properties that are set within methods in the class. This consideration should help you determine what properties should be continuously validated. A CreatedDate property, for example, may have a business need for it to never be null. That would be a prime example of a value that should have an invariant check. Other examples could include an EmailAddress or Username for a user object or a Product somehow attached to an order object. Does it make sense to have the primary object if it is missing that piece of information? If it doesn’t, you have a candidate for an invariant.
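
As a sketch of what that could look like, here is a hypothetical UserAccount domain object using the CreatedDate and EmailAddress examples above; the class and member names are mine, purely for illustration:

using System;
using System.Diagnostics.Contracts;

public class UserAccount
{
    public DateTime CreatedDate { get; private set; }
    public string EmailAddress { get; private set; }

    public UserAccount(string emailAddress)
    {
        CreatedDate = DateTime.UtcNow;
        EmailAddress = emailAddress;
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Checked whenever public work on the object completes.
        Contract.Invariant(CreatedDate != default(DateTime));
        Contract.Invariant(!string.IsNullOrEmpty(EmailAddress));
    }
}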

There is some configuration that you have to do to enable the use of invariant contracts. The image below shows the Code Contracts UI that is added when the Code Contracts extension is installed:

CodeContractInvariant

You need to ensure that “Perform Runtime Contract Checking” is set to Full; otherwise invariant contracts will not be enforced.

This whole discussion has been about the use of code contracts in the .NET Framework, which means they are also available in other parts of the framework, including WPF and ASP.NET.  Using code contracts in the models of an ASP.NET MVC application is a common usage, as is using them within the models of an MVVM WPF application.  The framework supports them wherever you need the ability to ensure a correct and valid object.  Use them where it makes sense.

Code Contracts – Postconditions and Preconditions

Code contracts, which were introduced in .NET Framework 4.0, enable a developer to publish various conditions that are necessary within an application. The supporting tools can be installed through the Visual Studio Gallery.  Code contracts involve the following:

• Preconditions – Conditions that have to be fulfilled before a method can execute
• Invariants – Conditions that do not change during the execution of a method
• Postconditions – Conditions that are verified upon completion of the method

Using code contracts requires a different approach to managing exception flow within an application. Some code you ordinarily write, such as ensuring that a returned object is not null, will be handled in the method for you by the code contract. Rather than validating everything that is returned from a method call, you ensure that everything entering and leaving your methods is correct.  This is a fundamental shift in how a lot of classic applications are written, as the basis of trust moves into the method that is being called.  This means that validation of the returned value does not need to be carried out in the calling method.

Before code contracts, a standard way to manage validation within a method was to perform a validation check, throwing an ArgumentException when a particular parameter is invalid. This is shown below:

internal Article GetArticle(int id)
{
    if (id <= 0) { throw new ArgumentException("id"); }
    // some work here
}

This code checks to ensure that the incoming argument is greater than 0 to represent a valid Id. If the code determines that the value is incorrect, it throws an ArgumentException with the name of the parameter that failed. Using contracts to perform this check enables consumers of the methods to get some information about the expectations of the method. The code below provides both Preconditions (Requires) and Postconditions (Ensures).

internal Article GetArticle(int id)
{
    System.Diagnostics.Contracts.Contract.Requires(id > 0);
    System.Diagnostics.Contracts.Contract.Ensures(
        System.Diagnostics.Contracts.Contract.Result<Article>() != null);
    // some work here
}
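
With that Ensures in place, hypothetical calling code (the repository and DisplayArticle names are made up) no longer needs its own null check on the result:

// The postcondition guarantees a non-null Article, so no null check is needed here.
var article = repository.GetArticle(42);
DisplayArticle(article);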

This is powerful because it gives you more distinct information directly from the debugger, as shown below.

CodeContractDebugger

Not only does the use of code contracts provide a more accurate message when validation fails, it also provides constant messaging about what the contracts expect, as shown in the screenshot below of the messages created during the build process.

CodeContractMessages

You can trap these kinds of errors as needed, as well as get detailed information about the validation issues.  Precondition and postcondition contracts are straightforward and concern themselves with incoming parameters and the return value of the method, respectively.  Invariants are a different breed, as they are concerned with the lifetime of the object.  Using invariant contracts is the subject of the next post.