All posts by Ken Bourassa

Culture shocks

Like many disciplines, programming is a team one. Yes, you can occasionally encounter a programmer earning a living while programming all alone in his basement. But since a lone programmer can only achieve so much on his own, most sizable projects require many programmers to cooperate.

Working with others always has its own set of challenges. Personality, culture, religion and many other things can become obstacles to teamwork. For programmers, even one's thought process can cause conflicts.

Not so long ago, I encountered a function written by a coworker that baffled me. It was only 7 lines of code, but it took me a moment to understand what it really did. I eventually went to ask the original programmer to explain his implementation decisions (his explanations are added below as code comments; the original code had none).

function GetSurObj(ASurObj : TSurObj; AObj : TObject) : TSurObj;
begin
  //First, I initialize my Result
  Result := nil; 
  //Then I validate the input parameters
  if (ASurObj = nil) and (AObj = nil) then
    EXIT;
  //if the ASurObj = nil but AObj isn't, try to find a proper ASurObj 
  if (ASurObj = nil) and (AObj <> nil) then
    Result := FindSurObj(AObj)
  else
    // if we get here, it means ASurObj <> nil
    Result := ASurObj;
end;

When I finally understood the actual use of the function, I couldn’t believe how complicated it ended up being. To me, the following makes a lot more sense.

function GetSurObj(ASurObj : TSurObj; AObj : TObject) : TSurObj;
begin
  Result := ASurObj; 
  if (Result = nil) and (AObj <> nil)  then 
    Result := FindSurObj(AObj)
end;

Both functions return exactly the same result and perform exactly the same validations. But it seems to me the purpose of the function is a lot easier to understand in my version, where the most significant line of code is the first one, than in my coworker's version, where the most significant line is the last one.

I can see the merit of initializing the result and validating input parameters, but I feel that, in this situation, it just adds way too much noise to a very simple function.

What do you think? Leave a comment about which implementation you prefer and why.

About namespaces, scopes and collisions

I was recently reading Raymond Chen's blog post The curse of the redefinition of the symbol HLOG. Although it's not exactly the same issue, it reminded me of a situation a coworker requested help on a long time ago.

The root of the issue was pretty simple: my colleague was getting the error E2037 Declaration of 'SomeProc' differs from previous declaration.

Now, most of the time, this is a pretty trivial error to correct. But I was initially baffled by the problem. Here's what the declaration looked like:

type
  TSomeClass = class
[...]
    Procedure SomeProc(ABitmap : TBitmap);
[...]
procedure TSomeClass.SomeProc(ABitmap : TBitmap);
begin
end;

Now, I don't know about you, but the two declarations look the same to me. It took me a few minutes to realize the source of the problem. Here's what the fix looked like:

type
  TSomeClass = class
[...]
    Procedure SomeProc(ABitmap : TBitmap);
[...]
procedure TSomeClass.SomeProc(ABitmap : Graphics.TBitmap);
begin
end;

Now, why was it required to use the fully scoped name in the function implementation? It so happens that, in the unit where the class was declared, the interface section had a uses clause referencing Graphics.pas, while the implementation section had one referencing Windows.pas. For this reason, TBitmap in the interface section was interpreted as Graphics.TBitmap, while TBitmap in the implementation section was interpreted as Windows.TBitmap.
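
Here's a minimal sketch of a unit in that state (the unit name is made up, but the mechanics are the ones described above):

unit SomeUnit;

interface

uses
  Graphics; // declares the VCL TBitmap class

type
  TSomeClass = class
    // Here, the closest uses clause is Graphics,
    // so TBitmap resolves to Graphics.TBitmap
    procedure SomeProc(ABitmap : TBitmap);
  end;

implementation

uses
  Windows; // also declares a TBitmap (a record), which now shadows the VCL one

// An unqualified TBitmap here would mean Windows.TBitmap, so the
// parameter must be fully qualified to match the interface declaration.
procedure TSomeClass.SomeProc(ABitmap : Graphics.TBitmap);
begin
end;

end.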

Implicit scoping saves a lot of typing, especially now that Delphi uses dotted unit names. I wouldn't want to be required to type
Generics.Collections.TList<System.Classes.TAction> every time I want to declare a list of TAction. That being said, like I mentioned recently, implicit behaviors can really surprise you when you are not aware of them.

One of the caveats of high-level programming

After I decided to study computer science, but before I actually started classes, I was scared. I was scared computer science (programming mostly) would be way too hard for me.

Part of that fear came from my total lack of knowledge on the subject. All I knew about EXE files was what I had learned opening them in a hex editor. I started thinking programming was about writing "stuff" in hexadecimal, and that if it was aligned properly inside a file, magic happened! Thankfully, that is not the case. If only I had known what a compiler was back then…

Compilers are powerful tools that transform instructions written in plain text into machine code. The use of compilers has many advantages. They can hide the details of the hardware. A new CPU comes around with new, better-performing instructions for a specific task? As soon as an update to the compiler is available, you can simply update the compiler and rebuild your project without altering a single line of code, and you can take advantage of the new instructions. There's more than one sequence of instructions that can do what you ask at the hardware level? Well, the compiler knows (or should know) which one performs best.

But compilers can create quite a few "problems" of their own. One of the problems arising from compiler technology is that, pretty often, programmers are not 100% aware of what the machine does "behind the scenes" for a given line of code. The Delphi language is especially rich in "implicit" stuff going on, notably in string/array management.

When I started working with the Exception class, one of the things I wondered about was the point of its CreateFmt constructor. I was asking myself: "What's the difference between these 2 lines?"

  raise Exception.Create(Format(SSomeConstant, [1]));
  raise Exception.CreateFmt(SSomeConstant, [1]);

It took me many years before I stumbled upon information that allowed me to work out the reason for CreateFmt's existence. Granted, the difference between the 2 isn't meaningful for most (99%) intents and purposes.

The reason why the 2 exist is partly that the compiler does a lot more than we know about. Let's take an example:

procedure CheckValue(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.create(Format(SValueTooHigh, [AValue]));
end;

what you are really doing is

procedure CheckValue(AValue : Integer);
var ImplicitStringVariable : string;
begin
  ImplicitStringVariable := '';
  try
    if AValue > 10 then
    begin
      ImplicitStringVariable := Format(SValueTooHigh, [AValue]);
      raise Exception.Create(ImplicitStringVariable);
    end;
  finally
    // pseudo-code: stands in for the compiler's implicit string finalization
    DecRefCount(ImplicitStringVariable);
  end;
end;

while using Exception.CreateFmt really does only this:

procedure CheckValue(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.createFmt(SValueTooHigh, [AValue]);
end;

In the few tests I've made, using Exception.CreateFmt instead of Exception.Create(Format(...)) made the function about 8 times faster when no exception was raised. In situations where performance matters, that's quite a difference. (OK, in situations where performance matters, exceptions wouldn't be used 😉 )
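
For the curious, here's roughly how such a measurement can be set up (a sketch only: TStopwatch comes from System.Diagnostics, and the constant, procedure names and iteration count are arbitrary picks of mine):

// Assumes a console app with: uses System.SysUtils, System.Diagnostics;
const
  SValueTooHigh = 'Value %d is too high'; // stand-in for the real constant

procedure CheckValueCreate(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.Create(Format(SValueTooHigh, [AValue]));
end;

procedure CheckValueCreateFmt(AValue : Integer);
begin
  if AValue > 10 then
    raise Exception.CreateFmt(SValueTooHigh, [AValue]);
end;

procedure RunBenchmark;
var
  SW : TStopwatch;
  I : Integer;
begin
  SW := TStopwatch.StartNew;
  for I := 1 to 10000000 do
    CheckValueCreate(5);    // never raises, but pays for the implicit string
  Writeln('Create(Format): ', SW.ElapsedMilliseconds, ' ms');

  SW := TStopwatch.StartNew;
  for I := 1 to 10000000 do
    CheckValueCreateFmt(5); // never raises, no implicit string housekeeping
  Writeln('CreateFmt:      ', SW.ElapsedMilliseconds, ' ms');
end;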

Moral of the story: the higher-level the language, the more things happen implicitly in the background. And those things don't always make sense. This video expresses it better than I ever could.

Floating or drowning

I never went to university. After high school, I didn't expect to last much longer in school. Thankfully, or so I thought back then, there are a lot of different technical degrees/training programs available on the market. So, when I had to choose between 6 more school years through university or 3 years (in Québec's wonderful CÉGEP), the choice was pretty easy.

Now, I can't say the training I received was bad. But looking back on it, I feel like a lot of critical information was missing. I made quite a few blunders because of information I didn't get. I learned from my mistakes… But learning before making mistakes is even better.

One of the things I wish they had explained in my classes is how floating-point data types work. I had so many WTF moments working with floating-point variables before I finally understood what was going wrong, it's not even funny. If you don't know of anything funny going on with floating points, get ready to have your mind blown.

So… Let's take this simple routine:

const
  MY_VALUE = 0.7;

procedure TForm1.Button1Click(Sender: TObject);
var dVal : Double;
begin
  dVal := MY_VALUE;

  if dVal <> MY_VALUE then
    ShowMessage('OMG! My CPU fails basic arithmetics!');
end;

So now I'm asking you: do you think the message will pop up? The short answer is YES! (The long answer is: it depends.)

The first thing that needs to be understood is that most floating-point values cannot be expressed precisely in binary. Some data types like BCD (binary-coded decimal) work around that problem, but BCD's performance isn't as good as that of other data types, so it isn't used for most purposes. So, once encoded, what is 0.7 encoded as? I'll use the following figures for illustration purposes:

64 bits : 0.70000000003
80 bits : 0.69999999999

The 2nd part of the problem comes from the fact that floating-point literals in Delphi are of type Extended (80 bits). So, here's what is happening with our code. dVal is a Double (only 64 bits of precision). When we compare it with MY_VALUE, Delphi treats the constant as a floating-point literal (thus 80 bits). Since it can't compare apples and oranges, it makes some juice; or, in this case, it widens dVal to 80-bit precision.

Now, why wouldn't dVal become 0.69999999999 once widened to 80 bits? Because it doesn't contain 0.7, but really 0.70000000003. To work around these problems, the Math unit contains several CompareValue functions that are designed to compare floating-point values.
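
For instance, the comparison in the routine above could be rewritten along these lines (SameValue and CompareValue both live in the Math unit; the 1E-9 epsilon is an arbitrary pick, and the second button handler is hypothetical):

// Assumes "uses Math;" in addition to the form's usual units
procedure TForm1.Button2Click(Sender: TObject);
var dVal : Double;
begin
  dVal := MY_VALUE;

  // SameValue compares within an epsilon (a default one is derived when omitted)
  if SameValue(dVal, MY_VALUE) then
    ShowMessage('Equal, within the default epsilon');

  // CompareValue returns 0 when the values match within the given tolerance
  if CompareValue(dVal, MY_VALUE, 1E-9) = 0 then
    ShowMessage('Equal, within 1E-9');
end;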

Now, for the long answer. Some of you might have tested the code above and not gotten the popup message, while some others did. Why is that? New compiler, new rules. One rule that did change is that under Delphi's 64-bit compiler, the Extended type is now 64 bits. So in Win64, all the floating-point values stay in 64-bit format and don't suffer from those rounding/conversion errors.

From what I've read, Extended became an alias of Double because the Win64 compiler uses SSE2 instructions for floating-point operations instead of the x87 instructions it uses in Win32.
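
A quick way to check what a given target gives you:

program CheckExtended;
{$APPTYPE CONSOLE}
begin
  // Win32 prints 10 (80-bit x87 format); Win64 prints 8 (Extended = Double)
  Writeln('SizeOf(Extended) = ', SizeOf(Extended));
end.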

As for the other platforms (iOS, Android), I'm unable to test them at this time.

Still, while the specifics may change, the problems linked to floating-point conversions remain the same no matter which platform/language you use. If you want to dig further into the subject, you can read What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Our first idea is rarely the best

This holds true for many aspects of life, but in programming, it's nearly a constant. Even for a very simple task, it is hard to come up with the very best solution right away.

Let's take, for example, determining whether a number is prime or not. The definition of a prime number is (according to Wikipedia):

A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself.

Based on the definition, the first implementation that comes to mind for most people (including myself) is the following.

function IsPrime(AValue : Integer) : boolean;
var I : Integer;
begin
  Result := True;
  for I := 2 to AValue - 1 do
  begin
    if AValue mod I = 0 then
      EXIT(False);
  end;
end;

This is a valid implementation, but it has quite a few flaws; for example, all numbers lower than 2 will be reported as primes. Let's assume the contract of the function states that AValue needs to be a positive integer larger than 1; we can now say the function gives the right answer all the time. But what is even better than getting the right answer? Getting it FAST! And how do we do this? Well, one of the lessons I learned from Michael Abrash's book Zen of Code Optimization is: "Know your data!"

Here, our data is numbers and their properties. The property we need to observe here is the symmetrical nature of a number's divisors. Let's start with a concrete number: 30. Its divisors are:

30 – 1, 2, 3, 5, 6, 10, 15, 30

What we can observe here is: if we take the Nth term from the beginning of the list and multiply it by the Nth term from the end of the list, we always get 30 (1 × 30, 2 × 15, 3 × 10, 5 × 6). And what lies straight in the middle of the list? The square root of the number we are observing.

30 – 1, 2, 3, 5,(√30), 6, 10, 15, 30

The conclusion here is: for any divisor of X greater than X's square root, there is also a matching divisor smaller than the square root. In other words, we don't really need to divide 30 by 15, since dividing 30 by 2 performs the same test. With that new knowledge in mind, we can now improve our IsPrime function.

function IsPrime(AValue : Integer) : boolean;
var I : Integer;
begin
  Result := True;
  for I := 2 to Trunc(Sqrt(AValue)) do
  begin
    if AValue mod I = 0 then
      EXIT(False);
  end;
end;

So, we went from AValue - 2 divisions all the way down to √AValue - 1 divisions. There certainly isn't any other optimization left, right? As a pure standalone function, that's pretty much as far as we can get. But it is still possible to go further than this, depending on how much memory we want to commit to the task.

With our latest revision of IsPrime, we start by dividing by 2, then by 3, then by 4… Wait! If 2 doesn't divide our number, 4 certainly won't… So why do we test 4? The main reason is, we don't "know" 4 isn't a prime number. If we knew it, we would know we had already tested 4 indirectly through one of its factors (in this case, 2). Computing this information every time we call the function would be pretty processing-intensive. But… keeping a list of all the primes would take a lot of memory and would also take a while to compute, no?

Actually, no! Remember, we only need a list of all the primes up to the square root of the number we want to test. That means that, to test the largest possible 32-bit unsigned integer (4294967295), we only need the list of all prime numbers from 2 to 65536. Spoiler alert! There are only 6542 primes in that range. Those can be computed in tens of milliseconds on a modern computer and take only 13 kB of memory if stored as words. (We could go as low as 8 kB by storing them as an array of bits, but it would be a lot less efficient to work with.) Now, what does our code look like?
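
If you want to double-check that 6542 figure, a throwaway sieve of Eratosthenes will do (this sketch is mine and separate from the class shown below):

function CountPrimesUpTo(ALimit : Integer) : Integer;
var
  Composite : array of Boolean;
  I, J : Integer;
begin
  SetLength(Composite, ALimit + 1); // dynamic arrays are zero-initialized
  Result := 0;
  for I := 2 to ALimit do
    if not Composite[I] then
    begin
      Inc(Result);  // I is prime
      J := I + I;
      while J <= ALimit do
      begin
        Composite[J] := True; // mark every multiple of I as composite
        Inc(J, I);
      end;
    end;
end;

// CountPrimesUpTo(65536) returns 6542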

(Full code example with comments will be available for download soon)

uses
  Generics.Collections, // TList<Integer>
  Math;                 // Min, used in IsPrime

type
  TPrimes = class abstract
  strict private class var
    FPrimeList : TList<Integer>;
    FHighestValueTested : Integer;
    class procedure LoadPrimesUpTo(AValue : Integer);
  strict private
    class constructor Create;
    class destructor Destroy;
  public
    class function IsPrime(AValue : Integer) : Boolean;
  end;

class function TPrimes.IsPrime(AValue: Integer): Boolean;
var I, iValueRoot, idx : Integer;
begin
  if AValue <= FHighestValueTested then
    // Every prime up to FHighestValueTested is already in the list
    Result := FPrimeList.BinarySearch(AValue, idx)
  else
  begin
    Result := True;
    iValueRoot := Trunc(Sqrt(AValue));
    // Make sure the list covers every prime up to the square root
    LoadPrimesUpTo(iValueRoot);
    // Find the index of the largest prime <= iValueRoot
    if not FPrimeList.BinarySearch(iValueRoot, idx) then
      Dec(idx);
    for I := 0 to Min(FPrimeList.Count - 1, idx) do
    begin
      if AValue mod FPrimeList.List[I] = 0 then
        EXIT(False);
    end;
  end;
end;

class procedure TPrimes.LoadPrimesUpTo(AValue: Integer);
var
  I: Integer;
begin
  // Test each new candidate in order; primes found are appended to the list
  for I := FHighestValueTested + 1 to AValue do
  begin
    if IsPrime(I) then
      FPrimeList.Add(I);
    FHighestValueTested := I;
  end;
end;
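
The class constructor and destructor aren't shown here; until the full example is up, here's a minimal sketch of what they can look like. Seeding the list with 2 gives the recursion between IsPrime and LoadPrimesUpTo a base case:

class constructor TPrimes.Create;
begin
  FPrimeList := TList<Integer>.Create;
  // Seed with the first prime so IsPrime/LoadPrimesUpTo have a starting point
  FPrimeList.Add(2);
  FHighestValueTested := 2;
end;

class destructor TPrimes.Destroy;
begin
  FPrimeList.Free;
end;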

In this example, I dynamically grow the list as needed, so if we don't need to test very large numbers, we use less memory. So, to test whether 4294967295 is prime, we went from up to 4294967293 divisions down to a maximum of 6542 divisions. I think we did a good job on this!
(On second thought, it is divisible by 5, so not that much of a gain for that specific number! 😉 )
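
And a quick usage sketch (note that the parameter is a signed Integer, so the largest value we can actually pass as written is 2147483647, which happens to be prime):

begin
  // 2147483647 = 2^31 - 1, a Mersenne prime and the largest signed 32-bit value
  if TPrimes.IsPrime(2147483647) then
    Writeln('2147483647 is prime');
end.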