An open letter to recruiters

Dear recruiters,

Before I say anything, I have to admit I have met some very good recruiters in my professional life, people as passionate about their jobs as devs themselves. If you are reading this, you know who you are, mostly because you enjoy your work and feel like you’re helping both the employee and the employer build a relationship that works for both of them. I’m well aware that I’m not a recruiter myself, nor do I intend to tell any of you how to do your work. However, considering the state of things today, I thought it was worth voicing my ideas. I hope somebody will listen and find them useful; in the worst-case scenario, they’ll serve as catharsis for all the complaints about the recruitment industry I’ve had in my head for a while.
Now, let’s take a step back for a second. There is nothing more exciting than when you’re looking for a job and you get a reply to that ad from the company you liked. You do your homework to prepare for the interview: what they do, how they do it; although if it’s that company you really want, you’ll know all of that already. Now let’s look at it from the other side, shall we? You are the recruiter looking to hire the best talent on the market, and if you are recruiting in IT, it’s a tough game out there. Everybody wants IT people these days: infrastructure, DevOps, .NET devs (yep, that’s me!), and the list goes on and on. However, much like sending my CV to that company I really wanted to work for, or visiting the in-laws for the first time, you usually get one shot, and here is where my tone might begin to sound like a rant. If I get an email offering me a job in a field I’ve never worked in, or never expressed any interest in over a career several years long, chances are I will think you never read my CV or cared about how good (or awful) I might be in that role. This speaks poorly of you, because you are not trying to find me a role that works for my career, or to give your client a candidate that matches what they’re looking for; you just want to take a shot and see if you get the commission.
As a general rule of thumb, most developers in active technology hubs (such as Manchester) tend to receive upwards of ten emails from recruiters a week. Now, I understand that there is a lot of competition among recruiters out there and, to be fair, if you are reading this you probably do these things anyway, but here are the things I’d suggest you consider before you email a candidate.

Read my profile

Obvious, right? It takes about 5 minutes to read a normal-sized CV, or at least skim through it, and it can give you a revealing idea of the candidate. Take me: I did PHP programming about 5 years ago (2011), but I’ve done .NET for a lot longer than that, and I don’t mention PHP anywhere else on my CV. That should be a hint that I’m not looking for PHP jobs (I’m putting this in bold because I’m dead sure I’ll get a PHP job in my inbox from somebody who “read” the article).

Don’t send several jobs in one mail

Consider a guy emailing you saying he wants to apply for a position as a junior developer, mid-level developer, senior developer or chief architect, depending on what you have available. That raises a lot of flags, doesn’t it? Well, it’s the same when it goes the other way. I particularly hate the database-dump emails that start “I have the following roles available…”. They show a lazy approach; please don’t do that, it makes you look awful.

If I reply, please do reply back

This is a very old pet hate. When I was looking for a job about a year ago, I got several people interested; however, I was working on a Tier 2 visa, and a lot of companies do not sponsor visas (sponsorship is getting more common now, but refusals still happen a lot). If I took the time to write to you and explain my situation, take 2 minutes to reply: “Sorry, but our client does not sponsor visas; I’ll let you know if that changes.” Even if we all know that’s probably a lie, it makes us feel like the thread had closure.

Don’t just find candidates, find The Candidate

I know this sounds like a textbook cliché, but I’d be delighted to see the stats (feel free to share and prove me wrong) on how many answers you get to those generic emails sent to 20 or 30 candidates whose CVs you never read. I trust that one day we’ll have systems smart enough to choose the good candidates for a job, but until then, do a bit of manual filtering and send it only to the real prospects.

Make it personal

This follows on from the point above, but I’m a lot more inclined to reply to an email that is clearly directed at me, and you can do that by saying “I read this in your blog”, or “I saw your Stack Overflow profile”, or something as simple as “How’s work in ?”. If I see that you took a tiny bit of time to get to know me, I will definitely reply and tell you what I think of the role even if I’m not looking; it’s only polite, after all.

Finally

These may all sound a bit vague, but they are definitely my pet hates. It’s very difficult not to sound like a ranting old man right now (maybe because that’s what I’m doing), but I think these are the main issues giving recruiters the bad name they have. I’ve met some really great ones, so why settle for being the spam sender when you can actually send a quality job to a quality candidate? Think about it.

Empower your lambdas!

If you’ve used generic repositories, you will have encountered one particular problem: matching items using dynamic property names isn’t easy. Generic repositories have always been a must for me, as they save me from writing a lot of boilerplate code for saving, updating and so forth. Not long ago, I hit a problem: I was fetching entities from a web service and writing them to the database, and given that these entities had relationships, I couldn’t retrieve the same entity and save it twice.
Whenever my code fetched entities from the service, it had to work out whether an entity had been loaded previously and, instead of saving it twice, just update the last-updated time and any properties that might have changed. To begin with, I had some simple code in a base web service consumer class, like this:

var client = ServiceUtils.CreateClient();
var request = ServiceUtils.CreateRequest(requestUrl);
var resp = client.ExecuteAsGet(request, "GET");
var allItems = JsonConvert.DeserializeObject<List<T>>(resp.Content);

This was all very nice, and so far I had a very generic approach (using DeserializeObject&lt;T&gt;). However, I had to check whether the item had been fetched previously. An item’s identity could be determined by one or more properties, and my internal Id was meaningless in this context for deciding whether an object already existed. So I had to come up with another approach. I created a basic attribute and called it IdentityProperty; whenever a property defined the identity of an object externally, I would annotate it, so I ended up with entities like this:

public class Person: Entity
{
    [IdentityProperty]
    public string PassportNumber { get; set; } 
    
    [IdentityProperty] 
    public string SocialSecurityNumber { get; set; }

    public string Name { get; set; }
}
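
For completeness, the attribute itself can be a bare marker class. The original definition isn’t shown here, so this exact shape is my sketch of it:

```csharp
// Marker attribute: carries no data, only tags the properties that
// define an entity's identity against the external service.
[AttributeUsage(AttributeTargets.Property)]
public class IdentityPropertyAttribute : Attribute
{
}
```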

This marks all the properties that define identity in the context of the web services. So far, so good: my entities now know what defines them in the domain; now I need my generic service consumer to find them in the database so I don’t get duplicates. Considering that all my entities fetched from a web service have a LastCached and a Timeout property, ideally I would have something like this:

foreach (var item in allItems)
{
    var calculatedLambda = CalculateLambdaMatchingEntity(item);
    var match = repository.FindBy(calculatedLambda);

    if (match == null)
    {
        item.LastCached = DateTime.Now;
        item.Timeout = cacheControl;
    }
    else
    {
        var timeout = match.LastCached.AddSeconds(match.Timeout);
        if (DateTime.Now > timeout)
        {
            //Update the entity using reflection
            item.LastCached = DateTime.Now;
        }
    }
}

Well, actually, this is what I have, but the good stuff is in the CalculateLambdaMatchingEntity method. The idea behind that method is to calculate a lambda to be passed to the FindBy method, using only the properties that carry the IdentityProperty attribute. So, my method looks like this:

private Expression<Func<T, bool>> CalculateLambdaMatchingEntity<T>(T entityToMatch)
{
    var properties = typeof(T).GetProperties();
    var expressionParameter = Expression.Parameter(typeof(T));
    Expression resultingFilter = null;

    foreach (var propertyInfo in properties)
    {
        var hasIdentityAttribute = propertyInfo.GetCustomAttributes(typeof(IdentityPropertyAttribute), false).Any();

        if (!hasIdentityAttribute)
            continue;

        var propertyCall = Expression.Property(expressionParameter, propertyInfo);

        var currentValue = propertyInfo.GetValue(entityToMatch, null);
        var comparisonExpression = Expression.Constant(currentValue, propertyInfo.PropertyType);

        var component = Expression.Equal(propertyCall, comparisonExpression);

        //Combine the raw boolean expressions first and build the lambda
        //once at the end; combining already-built lambdas with
        //Expression.And would fail at runtime.
        resultingFilter = resultingFilter == null
            ? component
            : Expression.AndAlso(resultingFilter, component);
    }

    if (resultingFilter == null)
        return null; //no identity properties on this type

    return Expression.Lambda<Func<T, bool>>(resultingFilter, expressionParameter);
}

Fancy code apart, what this does is just iterate through the properties of the object and construct a lambda matching the object received as a sample. So, for our sample class Person, if our service retrieves a person with passport “SAMPLE” and social security number “ANOTHER”, the generated lambda would be the equivalent of issuing a query like

repository.FindBy(person => person.PassportNumber == "SAMPLE" && person.SocialSecurityNumber == "ANOTHER")

Performance you say?

If you’ve read the about section of my blog, you’ll know that I work for a company that cares about performance, so once I did this, I knew the next step was benchmarking the process. The fact that it was for a personal project doesn’t matter; I had to know that the performance made it a viable idea. So I ended up running a set of basic tests benchmarking the total time the update foreach would take, and I came up with these results:

Scenario                 Matching data   Ticks     Faster?
Lambda calculation       Yes             5570318   Yes
No lambda calculation    Yes             7870450   No
Lambda calculation       No              1780102   No
No lambda calculation    No              1660095   Yes

These results are actually quite simple to explain. When no data matches, the overhead of calculating a lambda makes it lose its edge, because no items match the query anyway. However, when there are matching items, the power of lambdas shows up, because the compiler doesn’t have to build the expression tree from an expression; instead, the query receives a previously built tree, so it’s faster to execute. So, back to the initial title: empower your lambdas!
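
To make that difference concrete, here is a small sketch of the two ways a predicate can reach a LINQ provider. It reuses the Person example from above and is my illustration rather than code from the project:

```csharp
// 1. A lambda literal: the compiler generates the expression tree
//    at the call site every time this line runs.
Expression<Func<Person, bool>> literal =
    p => p.PassportNumber == "SAMPLE";

// 2. A hand-built tree, which is what CalculateLambdaMatchingEntity
//    produces: the tree already exists when FindBy receives it.
var parameter = Expression.Parameter(typeof(Person));
var body = Expression.Equal(
    Expression.Property(parameter, "PassportNumber"),
    Expression.Constant("SAMPLE"));
var prebuilt = Expression.Lambda<Func<Person, bool>>(body, parameter);
```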
If you have a different point of view on these ideas, feel free to leave a comment, even if you are going to prove me wrong; I’ve always said that nobody knows everything, so I might be very mistaken here. On the other hand, if this helps, then my job here is done.

Common method for saving and updating on Entity Framework

This problem has been bugging me for some time now. One of the things I miss most from NHibernate when I’m working with EF is the SaveOrUpdate method. Once you lose it, you realize just how much you loved it in the first place. So, I set out to give my EF repositories one of those. My initial approach was rather simple, and really close to what you can find here or here, so I basically came up with this:

public T SaveOrUpdate(T item)
{
    if (item == null)
        return default(T);

    var entry = _internalDataContext.Entry(item);

    if (entry.State == EntityState.Detached)
    {
        if (item.Id != null)
            TypeDbSet.Attach(item);
        else
            TypeDbSet.Add(item);
    }

    _internalDataContext.SaveChanges();
    return item;
}

This is a neat idea, and it works in most cases, with one tiny issue. I was working with an external API and caching the objects received from my calls; since these objects had their own keys, I was using those keys in my DB. So I had a Customer class whose Id property was set just before insertion, and since our method uses the convention that an object with an Id has already been saved, the repo would just attach it to the change tracker, and the object was never saved! Boo! Well, no panic: my repo also has a method called GetOne, which receives an Id and returns that object, so I added that into the soup and got this:

public T SaveOrUpdate(T item)
{
    if (item == null)
        return default(T);

    var entry = _internalDataContext.Entry(item);

    if (entry.State == EntityState.Detached)
    {
        if (item.Id != null)
        {
            var exists = GetOne(item.Id) != null;

            if (exists)
                TypeDbSet.Attach(item);
            else
                TypeDbSet.Add(item);
        }
        else
            TypeDbSet.Add(item);
    }

    _internalDataContext.SaveChanges();

    return item;
}

Now, if you think about it, how would you update an object?

  • Check if the object already exists on the DB
  • If it’s there.. update it!
  • If it’s not there.. insert it!

As you can see, the Check step involves GetOne. Now, if you don’t want that extra DB call, there is always a solution…

public T SaveOrUpdate(T item, bool enforceInsert = false)
{
    if (item == null)
        return default(T);

    var entry = _internalDataContext.Entry(item);

    if (entry.State == EntityState.Detached)
    {
        if (item.Id != null)
        {
            var exists = enforceInsert || GetOne(item.Id) != null;

            if (exists)
                TypeDbSet.Attach(item);
            else
                TypeDbSet.Add(item);
        }
        else
            TypeDbSet.Add(item);
    }

    _internalDataContext.SaveChanges();

    return item;
}

Granted, it’s not fancy, but it gets the job done and doesn’t require many changes. Passing the enforceInsert flag means you are certain that the object you’re saving requires an insert: it has an Id, but you know it’s not in the DB. Just what I was doing!
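
As a quick usage sketch (the Customer entity, apiCustomer variable and repository names here are illustrative, not from the actual project):

```csharp
// Object fetched from the external API: it arrives with its own key,
// so Id is already set even though it was never saved to our DB.
var customer = new Customer { Id = apiCustomer.Id, Name = apiCustomer.Name };

// We know it's new, so skip the GetOne existence check and force an insert.
repository.SaveOrUpdate(customer, enforceInsert: true);

// Default behaviour: the repository checks the DB to decide between
// Attach (update) and Add (insert).
repository.SaveOrUpdate(customer);
```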

Do you have any other way of doing this? Do you think this is wrong? Feel free to comment and let me know!

Consuming web services and notifying your app about it in Objective-C

Almost since the beginning of my exploits as an iOS developer, I’ve been working on apps that consume web services, and one big problem has been notifying different areas of my app that a certain event has occurred. My first genius idea was to create my own home brew of notifications using the observer pattern. It wasn’t all that bad, but a while later I realized I was reinventing the wheel, so I resorted to the one and only NSNotificationCenter.

Enter NSNotificationCenter

According to Apple’s docs for the notification center, this is the definition:

An NSNotificationCenter object (or simply, notification center) provides a mechanism for broadcasting information within a program. An NSNotificationCenter object is essentially a notification dispatch table.

So, this was my observer! How does it work, you say? Let’s get to it! But first, let’s get some context. What I have is a class called ServiceBase, which is the base class (duh!) for all classes consuming services. The interface definition for the class looks a bit like this…

 @interface ServiceBase : NSObject<ASIHTTPRequestDelegate>
  - (void) performWebServiceRequest: (NSString*) serviceUrl;
  - (void) triggerNotificationWithName: (NSString*) notificationName andArgument: (NSObject*) notificationArgument;
  - (NSString*) getServiceBaseUrl;
 @end
 

The class has been simplified; the actual class has a few other things that depend more on how I work, but you get the point. Given the idea of this post, I’m going to concentrate on the notification side of the class. However, we do need some sort of example going, so let’s take a look at the performWebServiceRequest method.

- (void) performWebServiceRequest: (NSString*) serviceUrl
{
    if (!self.queue) {
        self.queue = [[NSOperationQueue alloc] init];
    }

    NSURL *url = [NSURL URLWithString: serviceUrl];
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request addRequestHeader:@"accept" value:@"text/json"];

    [request setCompletionBlock: ^{
        //this will keep the self object reference alive until the request is done
        [self requestFinished: request];
    }];

    [self.queue addOperation: request];
}
 

Now we have this simplified method that creates a request, sets the requestFinished call as the completion block and queues up the request. I said I would focus on the notifications, but there is one thing to consider here:

 [request setCompletionBlock: ^{
  //this will keep the self object reference alive until the request is done
  [self requestFinished: request];
 }];
 

Keep in mind that this statement will preserve the reference to self until the request is finished, so it’s not released early by ARC. However, the way I use services in my app, each service works as a singleton (or quite close to one), and keeping the reference is not a problem because you are not creating a new instance of each service class every time you make a request. This also solves an issue with ASIHTTPRequest losing the reference to its delegate before the request is complete; however, that’s a story for another day. Now, moving on to the end of the request…

- (void)requestFinished:(ASIHTTPRequest *)request
{
    JSONDecoder* decoder = [[JSONDecoder alloc] init];
    NSData* data = [request responseData];
    NSArray* items = [decoder objectWithData: data];

    for (NSDictionary* element in items) {
        [self triggerNotificationWithName: @"ItemLoaded" andArgument: element];
    }
}
 

When the request is finished, it simply converts the data received (notice that this is a simple scenario) and posts a notification that an item has been loaded, using the triggerNotificationWithName:andArgument: method. Now, on to the actual notification method…

- (void) triggerNotificationWithName: (NSString*) notificationName andArgument: (NSObject*) notificationArgument
{
    NSNotificationCenter * notificationCenter = [NSNotificationCenter defaultCenter];

    if ( notificationArgument == nil )
    {
        [notificationCenter postNotificationName: notificationName object: nil];
    }
    else
    {
        NSMutableDictionary * arguments = [[NSMutableDictionary alloc] init];
        [arguments setValue: notificationArgument forKey: @"Value"];
        [notificationCenter postNotificationName: notificationName object: self userInfo: arguments];
    }
}
 

Now we only need to subscribe to a notification and retrieve the value, which is very simple; take this example inside a UIViewController:

- (void) viewDidLoad
{
    [super viewDidLoad];

    NSNotificationCenter * notificationCenter = [NSNotificationCenter defaultCenter];
    [notificationCenter addObserver: self selector: @selector(itemLoadedNotificationReceived:) name: @"ItemLoaded" object: nil];
}

- (void) itemLoadedNotificationReceived: (NSNotification*) notification
{
    NSDictionary* itemLoaded = [notification.userInfo valueForKey: @"Value"];
    // Do something with the item you just loaded
}
 

In the itemLoadedNotificationReceived: method, the app will receive a notification as each item is loaded. This may not be the best example, because when you’re loading several items they normally go into a cache to be loaded by a UITableView afterwards, but this idea should get you going.
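
One caveat worth adding here: an observer must unregister itself before it goes away, otherwise the notification center will try to message a deallocated object. A minimal sketch:

```objc
- (void) dealloc
{
    // Stop receiving notifications once this controller is destroyed
    [[NSNotificationCenter defaultCenter] removeObserver: self];
}
```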

Do you use a different approach? Do you normally use it like this? Well, if you have anything at all to say, feel free to leave it in the comments!

The status of Lucene2Objects

After some time without being able to work on it, I’ve managed to put some time into Lucene2Objects again. The first thing I did in my latest session was to separate the attributes from the main Lucene2Objects project, for a very simple reason that was brought to my attention by a fellow user of the library. Currently, if you want to annotate the entities in your domain project, you have to import the Lucene2Objects library into that project, thus adding a dependency on the library, on Lucene.NET and on the Lucene Contrib project (used for importing several analyzers), plus any other dependencies these might bring along. For a domain project, which is supposed to have as few dependencies as possible, this is very heavy, hence the need for a separation (of concerns, if you will).

The basic idea I followed in this update was to split the project into two libraries: a very light one containing the attributes, with no dependencies at all, and the actual library. Obviously this means creating another package, which I will do very soon, but it will hopefully allow people to integrate with Lucene2Objects more easily.

My next step is adding collection support to Lucene2Objects. I have a few ideas on this and I hope a new version will be done soon, but there is nothing worth pushing now. Hopefully I will manage to put more time into this from now on, so feel free to let me know if there’s something you’d like to see in Lucene2Objects!

Back in business!

I’ve been unable to write for some time, given that I’ve relocated to the UK. Now that all the arrangements have been taken care of and I can consider myself settled, I’ll be posting a couple of new articles soon. Also, I’m planning to move my development on Lucene2Objects to the new features that I want working for version 2.

Thanks to all the folks who reached out to me to ask about the status of Lucene2Objects!

David

SOLID PHP : Laying Foundations for applications (Part 1)

This will be the first of (hopefully) a few articles explaining (with code and all the neat stuff) some basics of software design for those daring to do this job. PHP has long been a language cursed by many programmers: some say the problem is that it is not a compiled language, others say it has weak typing, some talk about performance, and there are many, many complaints; you even get to see people who say it sucks! and just can’t stop wondering why some big players use PHP. Personally, I don’t believe in bad languages (with some exceptions, cough… VB… cough); rather, I believe in bad programmers or, to be less aggressive, in programmers who dislike different ways of doing things (now that I think about it, I don’t know which one is worse…).

Anyway, I’ve chosen to do my examples in PHP because there is already plenty of literature on the subject using C#, Java or even C++. When we talk about SOLID, the first thing to say is that SOLID is a list of 5 principles that encourage several good practices. They’re good, they’re pretty simple; however, they are not the only ones. There are many principles in software design, partly because software design is a young science with plenty of new ground to cover, and partly because there are (obviously) more than 5 design principles for good software; still, these 5 are quite important and basic. Now, will these design principles make my code perfect? Well… nope, no way! There is no such thing as perfect code (hello worlds not included), mostly because when we write code, we make mistakes (remember, we are humans…). However, following these principles will prevent a lot of those mistakes, will make your code more readable and will make your system (among other things) scalable.

Today I will start with the first principle, the Single Responsibility Principle. This principle states one elementary truth: there should be one (and only one!) reason for a class to change. Sadly, I can’t make the “one” bolder than the rest of the principle, but I hope the parentheses give the correct amount of drama. Now, what is a “reason for a class to change”? By that we mean anything that will make you touch (edit) the code again, and using that definition we have to agree that touching (editing) working code has a problem: it introduces bugs…

Now, let’s stop being so chatty and see some code shall we?

class UserManagement
{
    public function LogUserIn($userName, $password)
    {
        global $dbConf; // configuration grabbed from global state: part of the problem!

        $host  = $dbConf["host"];
        $dbUsr = $dbConf["user"];
        $dbPwd = $dbConf["pass"];
        $link  = mysql_connect($host, $dbUsr, $dbPwd);
        $logger = new FileLogger();

        if ( !$link ) {
            $logger->Log("Error opening connection to DB");
            return false;
        }

        if ( !mysql_select_db($dbConf["db"], $link) ) {
            $logger->Log("Error selecting DB");
            return false;
        }

        //Prevent some injection over here...
        $user = mysql_escape_string($userName);
        $pwd  = mysql_escape_string($password);

        $query = "SELECT COUNT(*) FROM Users " .
                 "WHERE user = '$user' AND Pass = '$pwd'";
        $rsc = mysql_query($query, $link);

        if ( !$rsc ) {
            $logger->Log("Error querying DB");
            return false;
        }

        $row = mysql_fetch_array($rsc);
        mysql_close($link);

        return count($row) > 0 && $row[0] > 0;
    }
}
 

Some may say: OK, it’s a perfectly working function, what’s the fuss all about? Well, what does this function do? The function itself does many things:

  • Connects to the database
  • Builds the Log system explicitly and logs errors
  • Handles domain logic code (the whole COUNT to figure out if the user exists..)

So, this function has several reasons to change. What happens if we want to change the database? We need to change this code. What happens if we change the log system? We need to change this code. What happens if we also have to check some more stuff? We have to touch this code again. So, the code has many reasons that may make it change. How do we solve this? Using contracts (a.k.a. interfaces). If we are to connect to the database, we define what the connector has to look like; if we are going to perform logging, we define what the logger has to look like, and so on. If we do all that, then our class can manage only one thing, say managing the login of a user. Imagine now we have these three interfaces:

interface IDatabaseConnection
{
    public function OpenConnection();
    public function BuildQuery($queryString);
    public function EscapeString($string);
    public function CloseConnection();
}

interface IDatabaseQuery
{
    public function GetAsRowCollection();
    public function GetScalarValue();
    //many more...
}

interface ILogger
{
    public function Log($message);
}

With these interfaces, we define contracts. For instance, we’re saying that any class implementing the ILogger interface needs to have a Log function receiving a message argument. Interfaces can even force types using type hinting, therefore providing even better-defined contracts, which is always cool.

Once we have defined the contracts we need, then we could rewrite our code to something like this:

class UserManagement
{
    private $logger;
    private $dbConnector;

    public function __construct(IDatabaseConnection $connector,
                                ILogger $logger) {
        $this->logger = $logger;
        $this->dbConnector = $connector;
    }

    public function LogUserIn($userName, $password)
    {
        //Prevent some injection over here...
        $user = $this->dbConnector->EscapeString($userName);
        $pwd  = $this->dbConnector->EscapeString($password);

        $query = "SELECT COUNT(*) FROM Users " .
                 "WHERE user = '$user' AND Pass = '$pwd'";
        $command = $this->dbConnector->BuildQuery($query);

        if ( $command == null ) {
            $this->logger->Log("Error building DB command");
            return false;
        }

        $this->dbConnector->OpenConnection();
        $count = $command->GetScalarValue();
        $this->dbConnector->CloseConnection();

        return $count != 0;
    }
}

We’ve written code that is far more readable, with no dependency on anything but domain logic. Well, perhaps we’re still building SQL here, and that’s not good, but to avoid having to do that we could use Propel or Doctrine: instead of writing plain, cold SQL we could let an ORM framework do the dirty work for us. I’m just giving a simple example here using a traditional database connection, but if you have an elaborate application, seriously consider using Propel or Doctrine.

Now we would need to implement the ILogger and IDatabaseConnection interfaces to be able to perform the operations we need. However, if we need to change the way we connect to the database, we’ll just change the implementation (or add a new one!) of IDatabaseConnection, and that’s about it!
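
For instance, a file-based ILogger could look like this. The class name matches the FileLogger used in the first example, but the body (and the default file name) is my sketch of one possible implementation:

```php
class FileLogger implements ILogger
{
    private $logFile;

    // The default file name is an assumption; adjust to taste.
    public function __construct($logFile = "application.log")
    {
        $this->logFile = $logFile;
    }

    public function Log($message)
    {
        // Timestamp each entry and append it to the log file
        $line = date("Y-m-d H:i:s") . " " . $message . PHP_EOL;
        file_put_contents($this->logFile, $line, FILE_APPEND);
    }
}
```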

However, keep in mind that the catch with the Single Responsibility Principle is that while it helps with dependencies, it makes your design grow substantially. We started with one class and ended up with 3 interfaces plus their implementations (3 more classes) and, finally, the same (but reduced) class. Bottom line: these principles are for designing a proper application. If you’re making a small tool of your own, like a scraper or a small RSS news reader, SOLID principles are not the thing you should be worrying about; they apply when you are facing an application that may grow.

Installing Smarty

A Quick & Dirty version…

Smarty is one of the most important template engines in the PHP development world. It provides rather easy and “clean” usage, and setting it up is not hard, despite some quick-install and Windows tutorials I saw in my early documentation-seeking moments.

Before anything else, I’ll assume that you are not looking for a tutorial on Smarty; this will cover only how to get it running. Well, here is a simple list of how to set Smarty up in your development environment.

  1. Download Smarty. It’s free, lightweight and fast. Also, it’s the only(?) real template engine available for PHP.
  2. To configure Smarty, you’ll have to add the downloaded Smarty project to your web project (I assume you have it in a folder) and load it; nothing fancy!

    Take, for instance, the case where I have Smarty on my server in lib/smarty. If I use these lines in my entry point file (usually my index.php file):

       define("PATH_BASE", dirname(__FILE__));
       define("DS", DIRECTORY_SEPARATOR);
      

    Then, I’ll be able to load Smarty with a really simple call:

       require_once( PATH_BASE . DS . "lib" . DS . "smarty" . DS . "Smarty.class.php" );
       $smartyObject = new Smarty();
      

    On a side note, I received a comment some time ago about the DIRECTORY_SEPARATOR constant definition not working. I’ve looked into the problem, and the constant is part of the PHP core for directories.

  3. Now that Smarty has been loaded, we need to point out the directories where it will look for templates and store generated files. This can be achieved with 3 simple instructions:
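
    Those three instructions point Smarty at its working folders; something along these lines (the compile and cache folder names are my assumption, any writable directories will do):

    ```php
    $smartyObject->template_dir = PATH_TEMPLATES;                  // where your .tpl files live
    $smartyObject->compile_dir  = PATH_BASE . DS . "templates_c";  // compiled templates (must be writable)
    $smartyObject->cache_dir    = PATH_BASE . DS . "cache";        // cached output (must be writable)
    ```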

    I assume that PATH_TEMPLATES is the path to your templates folder. That’s it! With a few lines of code, you have Smarty up and running! All you have to do now is assign variables and load the template, like this:

       $smartyObject->assign("message", "Hello world!");
       $smartyObject->display("hello.tpl");