Interface better practices

Today's post is inspired by a question I answered on Stack Overflow a while ago. I'm bringing you some important points to consider when using interfaces.

First, one thing worth mentioning is that interfaces are not necessarily reference counted. For example, TComponent implements IInterface, but if we look at its _AddRef and _Release methods, they both return -1, indicating that no reference counting occurs. TInterfacedObject, on the other hand, does take care of the reference counting. This is an important distinction because it affects how we should use the interface.
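To make the distinction concrete, here is a minimal sketch (my own illustration, not the RTL's actual code) of a non reference-counted implementor: by returning -1 from _AddRef and _Release, handing out interface references never affects the object's lifetime.

[Delphi]
uses System.Types; //for S_OK / E_NOINTERFACE

type
  TNonRefCounted = class(TObject, IInterface)
  protected
    function QueryInterface(const IID: TGUID; out Obj): HResult; stdcall;
    function _AddRef: Integer; stdcall;
    function _Release: Integer; stdcall;
  end;

function TNonRefCounted.QueryInterface(const IID: TGUID; out Obj): HResult;
begin
  if GetInterface(IID, Obj) then
    Result := S_OK
  else
    Result := E_NOINTERFACE;
end;

function TNonRefCounted._AddRef: Integer;
begin
  Result := -1; //no counting happens
end;

function TNonRefCounted._Release: Integer;
begin
  Result := -1; //Destroy is never called from here; someone must free the object explicitly
end;
[/delphi]

This is essentially what TComponent does (unless a VCLComObject has been attached to it).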

Reference-counted interfaces

When using those, it is usually preferable to avoid keeping a reference to the implementing object, since we have no control over the object's lifetime: the object gets destroyed as soon as its last interface reference is cleared.

If we absolutely need to keep the object reference around, we have 2 options.

  1. Keep an interface variable alongside the object variable to keep the object alive.

    [Delphi]
    TMyClass = class(TInterfacedObject);
    […]
    TForm1 = class(TForm)
    private
      FObject : TMyClass;
      FObjectIntf : IUnknown;
    public
    […]
    //Creating the object
    FObject := TMyClass.Create;
    FObjectIntf := FObject;
    //Destroying the object
    FObject := nil;     //Do NOT free the object
    FObjectIntf := nil; //FObject's destructor is called here.
    [/delphi]

  2. Use a FreeNotification mechanism to set the object variable back to nil when the object is freed. This is built into TComponent. Even though our class isn't derived from TComponent, we can still leverage the functionality like this :

    [Delphi]
    TMyClass = class(TInterfacedObject)
      FNotifier : TComponent; //Created in TMyClass's constructor, freed in its destructor
    […]
    TForm1 = class(TForm)
    private
      FObject : TMyClass;
    protected
      procedure Notification(AComponent: TComponent; Operation: TOperation); override;

    […]
    procedure TForm1.Notification(AComponent: TComponent; Operation: TOperation);
    begin
      inherited;
      if (Operation = opRemove) and (FObject <> nil) and (AComponent = FObject.FNotifier) then
        FObject := nil;
    end;
    //Creating the object
    FObject := TMyClass.Create;
    FObject.FNotifier.FreeNotification(Self); //Self is the form; it gets notified when FNotifier is destroyed
    //Destroying the object
    FObject.Free; //Here, FObject is assigned.
    Assert(FObject = nil); //Here, FObject is not assigned anymore; it was set to nil in TForm1.Notification.
    [/delphi]

When working with reference-counted interfaces, it is worth noting that if 2 interfaces reference each other, they are never going to be freed. This is called a "circular reference". The same issue exists with Automatic Reference Counting (ARC). The best method to break circular references is a subject I still have to research (especially in regard to the new [weak] attribute), so I won't make any suggestion here, but I thought it was worth mentioning.
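Just to illustrate the problem (with hypothetical INode/TNode types, not a suggestion on how to fix it):

[Delphi]
type
  INode = interface
    procedure SetOther(const ANode : IInterface);
  end;

  TNode = class(TInterfacedObject, INode)
  private
    FOther : IInterface;
  public
    procedure SetOther(const ANode : IInterface);
  end;

procedure TNode.SetOther(const ANode : IInterface);
begin
  FOther := ANode;
end;

procedure CreateLeak;
var
  A, B : INode;
begin
  A := TNode.Create; //refcount of A's object = 1
  B := TNode.Create; //refcount of B's object = 1
  A.SetOther(B);     //refcount of B's object = 2
  B.SetOther(A);     //refcount of A's object = 2
end; //A and B go out of scope: both counts drop to 1, never 0, so both objects leak
[/delphi]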

Non reference-counted interfaces

Here, we need to keep an object reference around to be able to free our object's memory, as it won't be freed through reference counting and, unless it implements IDisposable or a similar interface, we can't free it through the interface reference. Thus, we have the opposite problem: we are unable to determine whether any references to our object are left.

That being said, the problem isn't specific to interfaces. The same problem would occur if we passed a reference to our object around as a plain object reference. Since this post is more about interfaces, I won't delve into it here (though I might write about it some other time). But one of the ways to make sure no references to our object are left is (*drum roll*) FreeNotification.

In which circumstances would we want to use interfaces without reference counting? Interfaces are tools, and a tool's usefulness is mostly limited by one's imagination. But one use I can name off the top of my head is as a "method binding" technique or pseudo "duck typing". One such example can be seen in one of my previous posts here. Interfaces are also pretty much how anonymous methods are implemented behind the scenes. I'm sure you guys can figure out clever uses for them!
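Here's a rough sketch of the "method binding" idea (the IProgressReporter interface and RunLongTask routine are hypothetical): any object, whatever its class hierarchy, can be handed to a routine as long as it exposes the expected methods through the interface.

[Delphi]
type
  IProgressReporter = interface
    ['{7B0C2D14-5A7E-4C53-9B7D-0E9D3C2A1F64}']
    procedure ReportProgress(APercent : Integer);
  end;

procedure RunLongTask(const AReporter : IProgressReporter);
var
  I : Integer;
begin
  for I := 1 to 100 do
  begin
    //... do a slice of the work ...
    if Assigned(AReporter) then
      AReporter.ReportProgress(I);
  end;
end;
[/delphi]

A form with a progress bar, a logger, or a do-nothing stub can all be passed to RunLongTask without sharing a common ancestor, as long as they implement IProgressReporter.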

One manager to rule them all

I guess I feel I'm not challenged enough at my job, because sometimes I deliver way more than I was asked for (which got me in trouble once or twice).

I was once asked to develop a simple cache manager. We were downloading documents from a server, and we wanted to have a local cache so we would only download them at most once a day.

The specifications were something like this :

  TOurDocumentCacheManager = class
  public
[...]
    procedure WriteToCache(const DocumentId : string; Document : TOurSpecificDocumentClassInstance);
    function ReadFromCache(const DocumentId : string; Document : TOurSpecificDocumentClassInstance) : Boolean; 
[...] 
  end; 

I felt this was WAY too specific for a job that is pretty general.  So I altered the specifications like this :

  ICacheable = interface(IStreamPersist)
    function GetUniqueId : String;
  end;

  TCacheManager = class
  public
[...]
    procedure WriteToCache(Document : ICacheable);
    function ReadFromCache(Document : ICacheable) : Boolean; 
[...] 
  end; 

So, instead of having a cache manager that can manage a single type of document, we have a cache manager that can manage them all.

Believe it or not, that still landed me a few complaints, though all easily refuted. One of them might even come to your mind… "What if we want to cache an object from a 3rd party? We can't magically add ICacheable to the object!". And this is a valid point, until we realize that writing an ICacheable wrapper around an object is simpler than writing a new cache manager.
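For illustration, here is a rough sketch of such a wrapper (TThirdPartyDoc and its LoadFromStream/SaveToStream/Id members are hypothetical stand-ins for whatever the 3rd party class offers):

  TThirdPartyDocCacheAdapter = class(TInterfacedObject, ICacheable)
  private
    FDoc : TThirdPartyDoc;
  public
    constructor Create(ADoc : TThirdPartyDoc);
    //IStreamPersist
    procedure LoadFromStream(Stream : TStream);
    procedure SaveToStream(Stream : TStream);
    //ICacheable
    function GetUniqueId : String;
  end;

  constructor TThirdPartyDocCacheAdapter.Create(ADoc : TThirdPartyDoc);
  begin
    inherited Create;
    FDoc := ADoc;
  end;

  procedure TThirdPartyDocCacheAdapter.LoadFromStream(Stream : TStream);
  begin
    FDoc.LoadFromStream(Stream); //delegate to the wrapped object
  end;

  procedure TThirdPartyDocCacheAdapter.SaveToStream(Stream : TStream);
  begin
    FDoc.SaveToStream(Stream);
  end;

  function TThirdPartyDocCacheAdapter.GetUniqueId : String;
  begin
    Result := FDoc.Id;
  end;

Writing that adapter once is a lot cheaper than writing and maintaining a second cache manager.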

So… I guess the point I was trying to make here is how easy it is to limit the potential of the code we write. Sometimes, it doesn't involve much (if any) extra work to make our code more reusable, and it's pretty much always worth it.

Shlemiel the painter strikes again

Once upon a time, I was introduced to Shlemiel.  Who is Shlemiel? For those too lazy to look up the link, he’s the guy from this joke :

Shlemiel gets a job as a street painter, painting the dotted lines down the middle of the road. On the first day he takes a can of paint out to the road and finishes 300 yards of the road. “That’s pretty good!” says his boss, “you’re a fast worker!” and pays him a kopeck.
The next day Shlemiel only gets 150 yards done. “Well, that’s not nearly as good as yesterday, but you’re still a fast worker. 150 yards is respectable,” and pays him a kopeck.
The next day Shlemiel paints 30 yards of the road. “Only 30!” shouts his boss. “That’s unacceptable! On the first day you did ten times that much work! What’s going on?”
“I can’t help it,” says Shlemiel. “Every day I get farther and farther away from the paint can!”

We can all agree Shlemiel wasn't working very efficiently! Spotting inefficiency in tasks in the physical world is pretty easy because it's very concrete. Doing the same in the programming world is a lot harder, since we don't usually observe the execution of our code, only the end result. It's very easy to copy Shlemiel's technique without even realizing it.

We Delphi programmers can find examples of Shlemiel's work pretty close to home. There are a few of those in Delphi's VCL/RTL. The one I'm going to introduce today is StringReplace.

StringReplace is notoriously slow for large strings, and after profiling the function to figure out why, I'm pretty amazed Embarcadero didn't bother to fix its implementation, as it is very simple to fix.

The source of the problem lies here :

[Delphi]

Offset := AnsiPos(Patt, SearchStr);
if Offset = 0 then
begin
Result := Result + NewStr;
Break;
end;
Result := Result + Copy(NewStr, 1, Offset - 1) + NewPattern;
NewStr := Copy(NewStr, Offset + Length(OldPattern), MaxInt); //This line is a problem
if not (rfReplaceAll in Flags) then
begin
Result := Result + NewStr;
Break;
end;
SearchStr := Copy(SearchStr, Offset + Length(Patt), MaxInt); //This line is a problem
[/Delphi]

The 2 flagged lines together take more than 90% of the execution time when the function is called with a large string as input. Now, what happens here is this :

Shlemiel realized how inefficient his method was. So the next day, Shlemiel decided to transport the whole road to his paint can. That way, he would need to walk a lot less to paint the road!

Every time the pattern is matched, they make a whole copy of NewStr and a whole copy of SearchStr. This makes the algorithm run in O(n²) time. Fixing this is relatively simple: instead of taking the road to the paint can, take the paint can to the road! …or, in other words, scan the string in place.

It is a relatively simple change to make, yet it takes the function from O(n²) to O(n). A simple test (loading Winapi.Windows.pas as input and replacing "A" with "B") took about 2 minutes to run with the original code, but less than 10 msec when scanning in place.
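To give an idea of the fix, here is a simplified sketch of the scan-in-place approach (case-sensitive, rfReplaceAll semantics only, assuming a Delphi recent enough that Pos accepts a start offset; it is not the actual code from my patched unit): a running offset walks the original string, and only the pieces between matches ever get copied.

[Delphi]
function ReplaceAllSketch(const S, OldPattern, NewPattern : string) : string;
var
  Builder : TStringBuilder;
  SearchPos, MatchPos : Integer;
begin
  Builder := TStringBuilder.Create(Length(S));
  try
    SearchPos := 1;
    repeat
      //Pos with a start offset scans S where it is; no copy of the remainder is made
      MatchPos := Pos(OldPattern, S, SearchPos);
      if MatchPos = 0 then
        Break;
      Builder.Append(Copy(S, SearchPos, MatchPos - SearchPos)); //text between matches
      Builder.Append(NewPattern);
      SearchPos := MatchPos + Length(OldPattern);
    until False;
    Builder.Append(Copy(S, SearchPos, MaxInt)); //tail after the last match
    Result := Builder.ToString;
  finally
    Builder.Free;
  end;
end;
[/Delphi]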

If you are curious about the new implementation of StringReplace, I made my implementation available in a unit that automatically replaces the implementation from System.SysUtils. You will need :

  • ab.Patch.System.SysUtils.pas
  • ab.MemUtils.pas

If there's another function you guys would like me to take a look at, let me know in the comments.

Evolving from the primitive world

One of the novelties of the recent versions of Delphi is the built-in record helpers for the primitive types (string, Integer, etc.). Record helpers are essentially class helpers for records (and, apparently, primitive types). The new helpers allow code like

[Code language="delphi"]
ShowMessage(1.ToString);
[/code]

to compile and work the same as the good old

[Code language="delphi"]
ShowMessage(IntToStr(1));
[/code]

One of the drawbacks of record helpers is that, like class helpers, only one can be "active" at any given point in the source code. Any new helper we declare will hide, or be hidden by (depending on scope resolution), Delphi's own helper. Also, record helpers are pretty good for generic operations on primitives, but what about more specific operations? It is not ideal to have something like this compile:

[Code language="delphi"]
var
sFirstName : string;
begin
[…]
sFirstName.GetAreaCode;
[/code]

This problem is especially true for strings, as we quite often use strings formatted in specific ways to give them special meaning.
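To make the first drawback concrete, here's a small hypothetical sketch: as soon as we declare our own helper for string, the built-in TStringHelper is hidden wherever our helper is in scope, and our very specific method shows up on every string in the program.

[Code language="delphi"]
uses System.SysUtils;

type
  TPhoneStringHelper = record helper for string
    function GetAreaCode : Integer;
  end;

function TPhoneStringHelper.GetAreaCode : Integer;
begin
  Result := StrToInt(Copy(Self, 2, 3)); //assumes the '(999)999-9999' format
end;

procedure Test;
var
  sFirstName : string;
begin
  sFirstName.GetAreaCode; //still compiles, even though it makes no sense for a first name
  //sFirstName.ToUpper;   //no longer compiles here: the built-in TStringHelper is hidden
end;
[/code]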

My favorite approach when I have to work with such strings is to create what I like to call a "Smart Primitive".

What I like about this concept is that, for one, the format is validated at the moment of initialization. That means any routine that uses the record can assume the value is in an accepted format. [1]

So, what is a smart primitive?

  1. Smart primitives are opaque records. Only their methods should use their internal fields. Very often, they will only have a single internal field.
  2. They implement at least 1 implicit conversion to and from a primitive type. That is how the internal field of the record should be initialized.

Now, let's get a little more concrete. Earlier, I gave the GetAreaCode example, so let's see what a basic PhoneNumber smart primitive would look like.

In North America, phone numbers always have 10 digits, the area code being the first 3 [2]. Now, let's say we want to allow users to enter any of these formats :

  1. (999)999-9999
  2. 999-999-9999
  3. (999)-999-9999
  4. 9999999999

Here’s a basic implementation :

[Code language="delphi"]
TabPhoneNum = record
private
  FPhoneNum : Int64;
  FOriginalValue : string;
public
  class operator Implicit(S : String) : TabPhoneNum;
  class operator Implicit(APhoneNum : TabPhoneNum) : String;
  […]
  function GetAreaCode : Integer;
end;

class operator TabPhoneNum.Implicit(S: String): TabPhoneNum;
var sNormalized : String;
begin
  sNormalized := NormalizePhoneNum(S); //Remove "(", ")", "-" and " "
  Result.FPhoneNum := StrToInt64(sNormalized); //If the normalized string is not a valid integer, we want to raise
  if GetDigitsCount(Result.FPhoneNum) <> 10 then
    raise Exception.Create(S + ' is not a valid phone number');
  Result.FOriginalValue := S;
end;

function TabPhoneNum.GetAreaCode : Integer;
begin
  //We don't need to validate the format of FOriginalValue, it's been done already
  //We don't need to validate the number of digits, it's been done already
  Result := (FPhoneNum div 10000000); //we don't need to "mod 1000" because we validated it is 10 digits
end;

function TForm1.GetAreaCode(const APhoneNum : TabPhoneNum) : Integer;
begin
  Result := APhoneNum.GetAreaCode;
end;

procedure TForm1.Button1Click(Sender : TObject);
begin
  ShowMessage(GetAreaCode(Edit1.Text).ToString);
  //Here, if Edit1.Text is not in a valid format, it will raise an exception before calling TForm1.GetAreaCode.
end;
[/code]

In the implementation of GetAreaCode, we see one of the main advantages of smart primitives: we don't have to deal with all the different string formats, and we don't have to validate the number of digits in FPhoneNum, because all of that was already done when the record was first initialized.

Of course, we could simply implement GetAreaCode like this :

[Delphi]
function GetAreaCode(const S : string) : Integer;
begin
Result := Copy(S, 2, 3).ToInteger;
end;
[/Delphi]

and go with the design-by-contract philosophy, stating that the input string needs to start with "(" followed immediately by the area code, etc. But that puts the burden of the contract on the caller. For a single method, it might make sense, but if you have a collection of methods all sharing the exact same preconditions, it makes more sense to lay out those preconditions as the smart primitive's contract, and then let its methods work with an already enforced contract.

This also has the advantage of "scoping" the methods where they belong. One time too many, I have opened a unit filled with routines each taking a single string as a parameter, none of them accepting the same "kind" of string.

So, how do you like this? Already doing it? Like it? Hate it?  Let me know what you think!

1. I say "Accepted" and not "Valid" on purpose. A smart primitive can allow invalid values.
2. Except in Mexico, where it varies. For the sake of simplicity, I've excluded this fact.

Culture shocks

Like many disciplines, programming is a team one. Yes, you can occasionally encounter a programmer earning a living programming all alone in his basement. But since a lone programmer can only achieve so much on his own, most sizable projects require many programmers to cooperate.

Working with others always comes with its own set of challenges. Personality, culture, religion and many other things can become obstacles to teamwork. For programmers, even thought processes can cause conflicts.

Not so long ago, I encountered a function written by a coworker that baffled me. It was only 7 lines of code, but it took me a moment to understand what it really did. I eventually went to ask the original programmer to explain his implementation decisions (added below as code comments; the original code had no comments).

function GetSurObj(ASurObj : TSurObj; AObj : TObject) : TSurObj;
begin
  //First, I initialize my Result
  Result := nil; 
  //Then I validate the input parameters
  if (ASurObj = nil) and (AObj = nil) then
    EXIT;
  //if the ASurObj = nil but AObj isn't, try to find a proper ASurObj 
  if (ASurObj = nil) and (AObj <> nil) then
    Result := FindSurObj(AObj)
  else
    // if we get here, it means ASurObj <> nil
    Result := ASurObj;
end;

When I finally understood the actual use of the function, I couldn’t believe how complicated it ended up being. To me, the following makes a lot more sense.

function GetSurObj(ASurObj : TSurObj; AObj : TObject) : TSurObj;
begin
  Result := ASurObj; 
  if (Result = nil) and (AObj <> nil)  then 
    Result := FindSurObj(AObj)
end;

Both functions have the merit of returning exactly the same result and making exactly the same validations. But it seems to me it's a lot easier to understand the purpose of the function looking at my version, where the most significant line of code is the first one, compared to my coworker's version, where the most significant line is the last one.

I can see the merit of initializing result and validating input parameters, but I feel that, in this situation, it just adds way too much noise to a very simple function.

What do you think? Leave a comment about which implementation you prefer and why.

Allowing breaking changes to break is good

I remember a discussion with a coworker some time ago… He came and proudly told me he always used ".AsString", ".AsInteger", etc. when dealing with TField descendants instead of using ".Value". His rationale was that, if the field type changed, code using Value would stop compiling while code using AsString would keep compiling.

Take the following code :

  
  FField.AsString := 'Hello World';
  FField.Value    := 'Hello World';

If FField is a TWideStringField, both lines mean essentially the same thing. On the other hand, if FField is a TIntegerField, the 2nd line won't compile.

While I need to admit my coworker was right that his approach was better than mine at making sure a given line of code keeps compiling in the future, he was also wrong in believing this was a good thing.

In this particular example, even though FField.AsString would keep compiling if FField became a TIntegerField, it would just raise an exception at runtime when trying to convert 'Hello World' to an Integer.

Personally, I tend to favor coding approaches that allow the compiler to catch problems at compile time. It's one of the perks of working with a strongly typed language.

So, what do *I* do? I always use Field.Value for typed TField variables. When I'm working with untyped TField variables, I never use Field.Value.

  
  var
    FTypedField   : TWideStringField;
    FUntypedField : TField;
    sSomeVariable : String;
[...]
  FTypedField.Value       := sSomeVariable;
  FUntypedField.AsString  := sSomeVariable;

In this situation, if FTypedField becomes a TIntegerField, the code won't compile. But why use AsString with an untyped TField? In this case, the advantage lies on the right side of the assignment: if sSomeVariable's type is changed, that line won't compile anymore.

About namespaces, scopes and collisions

I was recently reading Raymond Chen's blog post The curse of the redefinition of the symbol HLOG. Although not exactly the same issue, it reminded me of a situation a coworker asked for help with a long time ago.

The root of the issue was pretty simple: my colleague was getting the error E2037 Declaration of 'SomeProc' differs from previous declaration.

Now,  most of the time, this is a pretty trivial error to correct. But I was initially baffled by the problem. Here’s what the declaration looked like :

type
  TSomeClass = class
[...]
    Procedure SomeProc(ABitmap : TBitmap);
[...]
procedure TSomeClass.SomeProc(ABitmap : TBitmap);
begin
end;

Now, I don’t know about you, but the declarations look the same to me. It took me a few minutes to realize the source of the problem. Here’s what the fix looked like :

type
  TSomeClass = class
[...]
    Procedure SomeProc(ABitmap : TBitmap);
[...]
procedure TSomeClass.SomeProc(ABitmap : Graphics.TBitmap);
begin
end;

Now, why was it required to use the fully scoped name in the implementation? It so happens that, in the unit where the class was declared, the interface section's uses clause included Graphics, while the implementation section's uses clause included Windows. For this reason, TBitmap in the interface section was interpreted as Graphics.TBitmap, while TBitmap in the implementation section was interpreted as Windows.TBitmap.
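Laid out as a skeleton, the situation looked roughly like this (a simplified unit, not the original code):

unit SomeUnit;

interface

uses
  Graphics; //here, TBitmap means Graphics.TBitmap

type
  TSomeClass = class
    procedure SomeProc(ABitmap : TBitmap); //Graphics.TBitmap
  end;

implementation

uses
  Windows; //Windows also declares a TBitmap (the GDI BITMAP record)

procedure TSomeClass.SomeProc(ABitmap : TBitmap); //now resolves to Windows.TBitmap -> E2037
begin
end;

end.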

Implicit scoping saves a lot of typing, especially now that Delphi uses dotted unit names. I wouldn't want to be required to type
Generics.Collections.TList<Vcl.ActnList.TAction> every time I want to declare a list of TAction. That being said, like I mentioned recently, implicit behaviors can really hold surprises for you when you are not aware of them.

One of the caveats of high level programming

After I decided to study computer science, but before I actually started classes, I was scared. I was scared computer science (programming mostly) would be way too hard for me.

Part of that scare came from my total lack of knowledge on the subject. All I knew about EXE files was what I had learned by opening them in a hex editor. I started thinking programming was about writing "stuff" in hexadecimal, and that if it was aligned properly inside a file, magic happened! Thankfully, that is not the case. If only I had known what a compiler was back then…

Compilers are powerful tools that transform instructions written in plain text into machine code. The use of compilers has many advantages. They can hide the details of the hardware: a new CPU comes around with new, better-performing instructions for a specific task? As soon as a compiler update is available, you can simply rebuild your project without altering a single line of code and take advantage of the new instructions. There's more than one sequence of instructions that can do what you ask at the hardware level? Well, the compiler knows (or should know) which one performs best.

But compilers can create quite a few "problems". One of the problems that arise from compiler technology is that, pretty often, programmers are not 100% aware of what the machine does behind the scenes for a given few lines of code. The Delphi language is especially rich in "implicit" stuff going on, notably string/array management.

When I started to work with the Exception class, one of the things I wondered about was the point of its CreateFmt constructor. I was asking myself: "What's the difference between these 2 lines?"

  raise Exception.Create(Format(SSomeConstant, [1]));
  raise Exception.CreateFmt(SSomeConstant, [1]);

It took me many years before I stumbled upon information that allowed me to work out the reason for CreateFmt's existence. Granted, the difference between the 2 isn't meaningful for most (99%) intents and purposes.

The reason the 2 exist is partly because the compiler does a lot more than we know about. Let's take an example:

procedure CheckValue(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.create(Format(SValueTooHigh, [AValue]));
end;

what you are really doing is

procedure CheckValue(AValue : Integer);
var ImplicitStringVariable : string;
begin
  ImplicitStringVariable := '';
  try
    if AValue > 10 then
    begin
      ImplicitStringVariable := Format(SValueTooHigh, [AValue]);
      raise Exception.create(ImplicitStringVariable);
    end;
  finally
    DecRefCount(ImplicitStringVariable); //pseudo-code for the cleanup the compiler generates
  end;
end;

while using Exception.CreateFmt really does only

procedure CheckValue(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.createFmt(SValueTooHigh, [AValue]);
end;

In the few tests I've made, using Exception.CreateFmt instead of Exception.Create(Format(...)) made the function about 8 times faster when no exception was raised. In situations where performance matters, it's quite a difference. (OK, in situations where performance really matters, exceptions wouldn't be used 😉 )

Moral of the story: the higher level the language, the more things happen implicitly in the background. And those things don't always make sense. This video expresses it better than I ever could.

Floating or drowning

I never went to university. After high school, I didn't expect to last much longer in school. Thankfully, or so I thought back then, there are a lot of different technical degrees and training programs available on the market. So, when I had to choose between 6 more school years through university or 3 years (in Québec's wonderful CÉGEP), the choice was pretty easy.

Now, I can't say the training I got was bad. But looking back on it, I feel like a lot of critical information was missing. I made quite a few blunders because of information I didn't get. I learned from my mistakes… but learning before making mistakes is even better.

One of the things I wish they had explained in my classes is how floating-point data types work. I had so many WTF moments working with floating-point variables before I finally understood what was going wrong, it's not even funny. If you don't know about anything funny going on with floating point, get ready to have your mind blown.

So… Let's take this simple routine:

const
  MY_VALUE = 0.7;

procedure TForm1.Button1Click(Sender: TObject);
var dVal : Double;
begin
  dVal := MY_VALUE;

  if dVal <> MY_VALUE then
    ShowMessage('OMG! My CPU fails basic arithmetics!');
end;

So now, I'm asking you: do you think the message will pop up? The short answer is YES! (The long answer is: it depends.)

The first thing that needs to be understood is that most decimal fractions cannot be represented exactly in binary floating point. Some data types, like BCD (binary-coded decimal), work around that problem, but BCD's performance isn't as good as the other data types', so it isn't usually used. So, once encoded, what is 0.7 stored as? I'll use the following figures for illustration purposes :

64bits : 0.70000000003
80bits : 0.69999999999

The 2nd part of the problem comes from the fact that floating-point literals in Delphi are of type Extended (80 bits). So, here's what happens with our code: dVal is a Double (only 64 bits of precision). When we compare it with MY_VALUE, Delphi considers MY_VALUE a floating-point literal (thus 80 bits). Since it can't compare apples and oranges, it makes some juice, or in this case, upscales dVal to 80-bit precision.

Now, why wouldn't dVal become 0.69999999999 once upscaled to 80 bits? Because it doesn't contain 0.7, but really 0.70000000003. To work around those problems, the Math unit contains several SameValue and CompareValue functions that are designed to compare floating-point values.
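For example, here is a rough sketch of the same test using SameValue, which compares within an epsilon instead of bit for bit (Button2Click is just a hypothetical second handler):

uses
  System.Math;

procedure TForm1.Button2Click(Sender: TObject);
var dVal : Double;
begin
  dVal := MY_VALUE;

  if not SameValue(dVal, MY_VALUE) then //uses a small default epsilon based on the type's resolution
    ShowMessage('Still different, even within tolerance'); //should not appear
end;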

Now, for the long answer. Some of you might have tested the code above and didn't get the popup message, while some others did. Why is that? New compiler, new rules. One rule that did change is that, under Delphi's 64-bit compiler, the Extended type is now 64 bits. So in Win64, all the floating-point values stay in 64-bit format and don't suffer from this particular rounding/conversion error.

From what I read, Extended became an alias for Double because the compiler uses SSE2 instructions for floating-point operations instead of the x87 instructions it uses in Win32.

As for the other platforms (iOS, Android), I'm unable to test at this time.

Still, while the specifics may change, the problems linked to floating-point conversion remain the same no matter which platform or language you use. If you want to dig further into the subject, you can read What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Our first idea is rarely the best

This holds true for many aspects of life, but in programming, it's nearly a constant. Even for very simple tasks, it is hard to come up with the very best solution right away.

Let's take, for example, determining whether a number is prime or not. The definition of a prime number is (according to Wikipedia):

A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself.

Based on the definition, the first implementation that comes to mind for most people (including myself) is the following.

function IsPrime(AValue : Integer) : boolean;
var I : Integer;
begin
  Result := True;
  for I := 2 to AValue - 1 do
  begin
    if AValue mod I = 0 then
      EXIT(False);
  end;
end;

This is a valid implementation. It has a few flaws; for example, all numbers lower than 2 will be reported as prime. Let's assume the contract of the function states that AValue needs to be an integer larger than 1; we can now say the function gives the right answer all the time. But what is even better than getting the right answer? Getting it FAST! And how do we do this? Well, one of the lessons I learned from Michael Abrash's book Zen of Code Optimization: "Know your data!"

Here, our data is numbers and their properties. The property we need to observe here is the symmetrical nature of a number's divisors. Let's start with a concrete number: 30. Its divisors are :

30 – 1, 2, 3, 5, 6, 10, 15, 30

What we can observe here is: if we multiply the Nth term from the beginning of the list by the Nth term from the end of the list, we always get 30. And what lies straight in the middle of the list? The square root of the number we are observing.

30 – 1, 2, 3, 5,(√30), 6, 10, 15, 30

The conclusion here is: for any divisor of X greater than X's square root, there is a matching divisor smaller than the square root. In other words, we don't really need to divide 30 by 15, since dividing 30 by 2 performs the same test. With that new knowledge in mind, we can now improve our IsPrime function.

function IsPrime(AValue : Integer) : boolean;
var I : Integer;
begin
  Result := True;
  for I := 2 to Trunc(Sqrt(AValue)) do
  begin
    if AValue mod I = 0 then
      EXIT(False);
  end;
end;

So, we went from AValue - 2 divisions all the way down to √AValue - 1 divisions. There certainly isn't any other optimization left, right? As a pure standalone function, that's pretty much as far as we can get. But it is still possible to go further, depending on how much memory we want to commit to the task.

With our latest revision of IsPrime, we start by dividing by 2, then by 3, then by 4… Wait! If 2 doesn't divide our number, 4 certainly won't… so why do we test 4? The main reason is, we don't "know" 4 isn't a prime number. If we knew it, we would know we had already tested 4 indirectly through one of its factors (in this case, 2). Computing this information every time we call the function would be pretty processing-intensive. But… keeping a list of all the primes would take a lot of memory and would also take a while to compute, no?

Actually, no! Remember, we only need a list of all the primes up to the square root of the number we want to test. That means that, to test the largest possible 32-bit unsigned integer (4294967295), we only need the list of all prime numbers from 2 to 65536. Spoiler alert! There are only 6542 primes in that range. Those can be computed in tens of msecs on a modern computer and take only 13 kB of memory if stored as Words. (We can go as low as 8 kB if we store them as an array of bits, but that would be a lot less efficient to work with.) Now, what does our code look like?

(Full code example with comments will be available for download soon)

type
  TPrimes = class abstract
  strict private class var
    FPrimeList : TList<Integer>;
    FHighestValueTested : Integer;
    class procedure LoadPrimesUpTo(AValue : Integer);
  strict private
    class constructor Create;
    class destructor Destroy;
  public
    class function IsPrime(AValue : Integer) : Boolean;
  end;

class function TPrimes.IsPrime(AValue: Integer): Boolean;
var I, iValueRoot, idx : Integer;
begin
  if AValue <= FHighestValueTested then
    Result := FPrimeList.BinarySearch(AValue, idx)
  else
  begin
    Result := True;
    iValueRoot := Trunc(Sqrt(AValue));
    LoadPrimesUpTo(iValueRoot);
    if not FPrimeList.BinarySearch(iValueRoot, idx) then
      Dec(idx);
    for I := 0 to Min(FPrimeList.Count - 1, idx) do
    begin
      if AValue mod FPrimeList.List[I] = 0 then
        EXIT(False);
    end;
  end;
end;

class procedure TPrimes.LoadPrimesUpTo(AValue: Integer);
var
  I: Integer;
begin
  for I := FHighestValueTested + 1 to AValue do
  begin
    if IsPrime(I) then
      FPrimeList.Add(I);
    FHighestValueTested := I;
  end;
end;
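And a quick usage sketch (a hypothetical button handler; 2147483647 is 2^31 - 1, a known Mersenne prime):

procedure TForm1.Button2Click(Sender : TObject);
begin
  if TPrimes.IsPrime(2147483647) then //2^31 - 1, which also happens to be MaxInt
    ShowMessage('2147483647 is prime');
end;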

In this example, I dynamically grow the list as needed, so if we don't need to test very large numbers, we use less memory. To test whether 4294967295 is prime, we went from up to 4294967293 divisions down to a maximum of 6542 divisions. I think we did a good job on this!
(On second thought, it can be divided by 5, so not that much of a gain for that specific number! 😉 )