## Hiatus Warning

I haven't posted here in a while. (Since getting to SF, actually.)

My previous survey of opinionated elf logic wasn't purely an academic exercise. I originally wrote the argument for `dynamic` multimethods in C# using Steve's Opinionated Elf example because I was explaining to someone how to handle message routing between client and server in a multiplayer game. Namely, I have a set of Message types that all subclass a base Message:

```
class Message { }
class ChatMessage : Message { }
class MoveMessage : Message { }
class ActionEventMessage : Message { }
class LoginMessage : Message { }
```

and I was trying to explain how to divide the logic between client-side and server-side components: you could write different client/server logic for individual message types.

```
class ClientsideMessageHandler {
    void Route(Message m) { Handle((dynamic)m); }
    void Handle(Message m) { /* unhandled message type? */ }
    void Handle(ChatMessage m) { ChatWindow.Show(m.MessageSource, m.MessageText); }
    void Handle(MoveMessage m) { Entities.Find(m.EntityID).Move(m.newLocation); }
    // etc.
}
class ServerMessageHandler {
    void Route(Message m) { Handle((dynamic)m); }
    void Handle(Message m) { /* unhandled message type? */ }
    void Handle(ChatMessage m) { /* route chat message to listening clients */ }
    void Handle(MoveMessage m) { /* validate, then update nearby clients */ }
}
```

In my opinion this approach is better than explicit type-checking, both in terms of DRY and in terms of maintaining type hierarchies.

```
void Route(Message m) {
    var c = m as ChatMessage;
    if (c != null) { Handle(c); return; }
    // .. etc ..
}
```

So after that example, let's examine the performance. With the caveat that benchmarks can be misleading, here are some numbers from running a simple MessageHandler that counts the number of each type of message it receives. I also implemented a basic no-runtime-typing handler that maintains a separate typed queue for each message type.

**1 Million Message Calls:**

- No virtual calls: 2.5ms
- Explicit type checking: 15ms (*)
- Visitor: 17ms
- `dynamic`: 236ms

So, a basic method dispatch takes a couple of nanoseconds, and adding basic type checking adds roughly 10 nanoseconds, with a caveat (*): the cost is proportional to the number of types you're checking, so for 20 or 30 types the time would be up to 10 times as large. The Visitor pattern manages to be the fastest double-dispatch mechanism, taking 17 nanoseconds per call regardless of how many types are involved. Finally, `dynamic` turns out to be the slowest method, taking a full 236 nanoseconds per invocation. A quick sanity check in CPython:

```
>>> timeit('for m in items: handler.handle(m)')
0.201900066844
```

shows that this is about the same as the cost of a dynamic method invocation in Python. So, how many messages are you going to route per second? 100? A 23-microsecond total routing cost is fairly easy to swallow in exchange for the simple code. Routing 100,000 messages per second? If you only have 10 microseconds to respond to each message, it might be better not to spend 200+ nanoseconds per message on the routing dispatch.
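For reference, that CPython sanity check is easy to reproduce end to end. Here's a rough sketch (the class and method names are mine, and the absolute numbers will vary by machine); note that since Python dispatches every call dynamically on the receiver anyway, per-type routing is emulated with a dict lookup:

```python
import timeit

class Message: pass
class ChatMessage(Message): pass
class MoveMessage(Message): pass

class Handler:
    def __init__(self):
        # route table: concrete message type -> bound handler method
        self.routes = {ChatMessage: self.handle_chat,
                       MoveMessage: self.handle_move}

    def handle(self, m):
        # fall back to the default handler for unknown message types
        self.routes.get(type(m), self.handle_default)(m)

    def handle_chat(self, m): pass
    def handle_move(self, m): pass
    def handle_default(self, m): pass

handler = Handler()
items = [ChatMessage(), MoveMessage()] * 500

# time 100 passes over 1000 messages = 100k routed calls
cost = timeit.timeit(lambda: [handler.handle(m) for m in items], number=100)
print(f"{cost:.3f}s for 100k routed calls")
```

Dividing the total by the call count gives the per-dispatch cost, which is the number worth comparing against the C# figures above.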

Dear reader, have you experienced the treat that is Steve Yegge's writing? If not, I suggest you go read some of his best posts now instead of wasting time here, partly because they're charmingly entertaining, and partly because I want to discuss a topic he's already written a bit about.

> Now let's say one of your users wants to come in and write a little OpinionatedElf monster. ... Let's say the OpinionatedElf's sole purpose in life is to proclaim whether it likes other monsters or not. It sits on your shoulder, and whenever you run into, say, an Orc, it screams bloodthirstily: "I hate Orcs!!! Aaaaaargh!!!"

Steve's elf is a fairly harmless creature; it sits on your shoulder and simply judges everyone it meets. Some creatures it likes, some creatures it hates. Let's examine some different ways to tackle this problem. Note that Steve covers a lot of ground, such as extensibility. I'm going to focus on the slightly narrower problem of code ugliness and maintenance hassle, but ultimately adhering to strong principles of encapsulation helps us out with extensibility as well.

**Idea #1: Polymorphic method**: Standard OOP logic. If you have a behavior that all creatures share but implement differently, it should be a virtual method. Make `Creature` define a `virtual bool doesTheElfLikeMe()` method, and let creatures override it. The good thing about this is that with a proper type hierarchy, you can defer to base implementations. The bad thing is that a very abstract class, `Creature`, now has a very specific concern, `doesTheElfLikeMe`, in its definition.

```
Creature::doesTheElfLikeMe() { return true; }
Orc::doesTheElfLikeMe() { return false; }
Spider::doesTheElfLikeMe() { return false; }
HarmlessPetSpider::doesTheElfLikeMe() { return true; }
class Elf {
    void Greet(Creature c) {
        say(c.doesTheElfLikeMe() ? "hi, friend" : "DIE FIEND");
    }
}
```

In this example, the elf can be perfectly happy about a `HarmlessPetSpider`, which subclasses `Spider`, while still hating Shelob the EvilGiantSpider; EvilGiantSpider doesn't even need its own implementation of `doesTheElfLikeMe` because it defaults to the generic Spider behavior. However, adding a new Elf, or an OpinionatedGoose, involves going to each individual Creature source file (possibly ones you don't have source access to) and adding a new method. This seems like a maintenance nightmare.

**Idea #2: Monkey-patching** Steve gives an example of a Ruby definition where, when the Elf file gets loaded, it opens up all the classes it cares about and patches in a `doesTheElfLikeMe` function. This avoids the design issue of having the Elf's preferences scattered across a large number of code files, and it works in most dynamic languages. Here are examples in Ruby and JavaScript.

```
# in Elf.rb:
class HarmlessPetSpider
  def doesElfLikeMe; return true; end
end

// in Elf.js
Spider.prototype.doesElfLikeMe = function() { return false; };
```

The main issue that remains with monkey-patching (aside from incompatibility with static type systems) is that the data is still not encapsulated in the `Elf` module. Now, this may simply be my preference and experience with statically typed languages, but I don't like the idea of a system design where modules and types are giving each other functions all willy-nilly. Where is the separation of concerns? The encapsulation?

**Idea #3: Runtime type checks** Why does `Creature` need to define `doesTheElfLikeMe` in the polymorphic example? Shouldn't the Elf figure out whether he likes a creature or not? The code does seem very backwards: if we added an OpinionatedGoose, we shouldn't have to open up every single creature file again to add a `doesTheGooseLikeMe` method as well. If we simply put the logic in the Elf or Goose class, maintenance of the feature becomes simpler.

```
class Elf {
    bool DoesElfLike(Creature c) {
        if (c instanceof Orc) return false;
        if (c instanceof HarmlessPetSpider) return true;
        if (c instanceof Spider) return false;
        return true;
    }
}
```

Of course, this has its own drawbacks: now we've lost the type hierarchy. If I accidentally put the `instanceof Spider` check before the `instanceof HarmlessPetSpider` check, the type system is not going to tell me that I made a logic error; I've simply introduced a bug into the code.
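This ordering hazard is easy to demonstrate in any language with runtime type tests. A minimal Python sketch (class names mirror the example; both functions are my own illustration):

```python
class Creature: pass
class Spider(Creature): pass
class HarmlessPetSpider(Spider): pass

def does_elf_like_buggy(c):
    # BUG: the Spider check shadows the HarmlessPetSpider check,
    # because every HarmlessPetSpider is also a Spider.
    if isinstance(c, Spider): return False
    if isinstance(c, HarmlessPetSpider): return True
    return True

def does_elf_like_fixed(c):
    # Most-derived types must be checked first.
    if isinstance(c, HarmlessPetSpider): return True
    if isinstance(c, Spider): return False
    return True

pet = HarmlessPetSpider()
print(does_elf_like_buggy(pet))  # False -- the elf wrongly hates the pet
print(does_elf_like_fixed(pet))  # True
```

Nothing in the language flags the buggy ordering; it type-checks, runs, and silently returns the wrong answer.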

**Idea #4 -- Visitor Pattern** Ah, the good old Gang of Four has not deserted us in our time of need. As it turns out, runtime multiple dispatch is a fairly common need, common enough that programmers frequently come up with the same solution independently; it has come to be known as the *Visitor* pattern.

```
interface CreatureVisitor {
    void Visit(Creature c);
    void Visit(Orc c);
    void Visit(Spider c);
    void Visit(HarmlessPetSpider c);
}
class Orc : Creature {
    void accept(CreatureVisitor v) { v.Visit(this); }
}
class Spider : Creature {
    void accept(CreatureVisitor v) { v.Visit(this); }
}
// .. etc ..
class Elf : CreatureVisitor {
    void DoesElfLike(Creature c) { c.accept(this); }
    void Visit(Creature c) { Greet(true); }
    void Visit(Spider s) { Greet(false); }
    void Visit(HarmlessPetSpider s) { Greet(true); }
    void Visit(Orc o) { Greet(false); }
}
```

Hmmm. Well, now we've added boilerplate to every single Creature type, since each one has to define its own `accept` method. And your "list of all subclasses of Creature" is now hard-wired into CreatureVisitor: when you add a new Creature, you'll have to add it to the base CreatureVisitor interface and all of its implementors. This could be a real issue. Shelob herself won't bother us much, since her EvilGiantSpider can still inherit Spider's `accept` behavior, but adding a new type that the Elf has an opinion on involves changing CreatureVisitor and all its implementors, including any OpinionatedGeese, even if their opinions haven't changed. Is this much boilerplate really worth the hassle?

Well, maybe. In "enterprise"-land, you'd expect the CreatureVisitor bits to live downstream of the Creature bits, so you could manage change by first adding a new Creature, then updating your Visitors in one go. Incidentally, this suggests that writing general-use visitor types leads to more brittle code, since you'd want them packaged as close to the implementations of the visitors as possible to avoid breakage. Your Creature taxonomy becomes aggravating to modify, but your Visitor taxonomy becomes easier to modify. The benefit becomes noticeable when you get more Visitor-like things: when your OpinionatedGoose and his pal OpinionatedToaster, or other functions over Creatures such as Serializers or Renderers or what-have-you, come around with their own ideas about what they want to do to a Creature, your machinery is already there; the `accept`s and `Visit` slots are ready for implementation. Incidentally, this rigidity along the Creature axis comes up in functional programming languages fairly often; in one of my random musings I realized the Visitor pattern is an OOP analogue of functional pattern matching. But that is another story and shall be told another time.

But there will always be a little nagging voice in the back of your head. Your sense of DRY is tingling a bit because of the duplicated accept on all Creature types, and especially the list of types on the CreatureVisitor interface. Why do we have to add this boilerplate to implement something as fundamental as double dispatch?

**Idea #5 -- polymorphic dynamic dispatch in C#** C++/Java/C# resolve virtual functions via runtime dispatch on the type of the bound object, but not on the types of the arguments. Our dynamic-language friends such as Python and Ruby don't help either: although they route method calls at run time, they do not discriminate between method signatures based on argument types. Some languages, such as Common Lisp, support dispatching on the runtime types of the arguments. While neither naive dynamic binding nor naive static binding can implement multimethods, the introduction of scoped dynamic dispatch in C# gives us a new way of implementing our OpinionatedElf:

```
class Elf {
    bool DoesElfLike(Creature c) { return DoesElfLikeType((dynamic)c); }
    bool DoesElfLikeType(Creature c) { return true; }
    bool DoesElfLikeType(Spider s) { return false; }
    bool DoesElfLikeType(HarmlessPetSpider s) { return true; }
    bool DoesElfLikeType(Orc o) { return false; }
}
```

And there we go. We avoid touching external files like Idea #1 (adding methods to all Creatures for each opinion). We have all the logic encapsulated in our class without monkey-patching anything like in Idea #2. We are about as succinct as the explicit type-checking of Idea #3 without losing the type hierarchy information or being subject to ordering errors. And we avoid the boilerplate required for Idea #4, the Visitor pattern.

## FSharp Raytracer Part 4 - Specular Reflection and Mirrors

This episode, we're going to add reflection to our system! Luckily, reflections are extremely easy to add to a standard raytracer. Let's take a look at our previous raytrace function:

```
let raytrace ray =
    match closestIntersection (scene |> Seq.cast) ray with
    | None -> Vector3.Zero
    | Some(i) -> lambert i lights
```

We first need to add a depth counter to the `Ray` type so we know when to stop reflecting; otherwise, we'd keep bouncing around for too long in a scene with a lot of reflections. We'll initialize it to something like `N = 3` when we start the ray cast. Then we'll have to calculate the reflected ray at each intersection. Luckily, we already have our incoming ray and the surface normal, so it is a matter of simple vector math to arrive at the formula for the reflected ray:

```
let incident = Vector3.Normalize(-ray.Dir)
let reflected = 2.f * i.normal * (dot incident i.normal) - incident
```
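The formula is easy to sanity-check with a little vector arithmetic. A Python sketch using plain tuples instead of Vector3 (all helper names are mine):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(s, v): return tuple(s * x for x in v)
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def reflect(incident, normal):
    # reflected = 2 * normal * (normal . incident) - incident,
    # where 'incident' points back toward the ray origin
    return sub(scale(2.0 * dot(incident, normal), normal), incident)

# a ray arriving at 45 degrees onto a floor whose normal is +Y
incident = norm((1.0, 1.0, 0.0))
r = reflect(incident, (0.0, 1.0, 0.0))
print(r)  # mirrored in X: (-0.707..., 0.707..., 0.0)
```

Note the reflected vector has the same length as the incident one and the same angle to the normal, which is exactly what a mirror bounce should do.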

We then recursively call raytrace with the new ray, taking the current Lambert term and adding 30% of the reflected light to our current point.

```
let rec raytrace ray =
    if ray.N < 0 then Vector3.Zero
    else
        match closestIntersection (scene |> Seq.cast) ray with
        | None -> Vector3.Zero
        | Some(i) ->
            let incident = Vector3.Normalize(-ray.Dir)
            let reflected = 2.f * i.normal * (dot incident i.normal) - incident
            lambert i lights + 0.3f * raytrace { Pos = i.point; Dir = reflected; N = ray.N - 1 }
```

## FSharp Raytracer Part 3 - Phong Shading / Lambertian Reflection

Shading is the process of taking a point on a surface and calculating exactly what color we see there. In real life, a surface absorbs and emits light in all directions. The light that we see is the portion of light from that surface that is emitted directly towards our eyes (or camera).

In this installment, we'll simplify the equation down to just the diffuse reflection, i.e. the *k_d* term. This is called Lambertian reflection, after Johann Heinrich Lambert, who proposed the model in the 1700s.

The shading function for Lambertian reflection:

```
let lambert (i:Intersection<Shape>) lights =
    let rawcolor = dummyShade i
    let illumination (l:Light) =
        let lm' = l.Pos - i.point |> Vector3.Normalize // direction from point to light (normalized)
        rawcolor * dot lm' i.normal
    lights |> Seq.map illumination |> Seq.reduce (+)
```

Note that we just use our buddy from the last post, `dummyShade`, to grab a random color for each shape as the base color. We then write a function `illumination` that calculates the color contribution of each light in the scene. The final line, `lights |> Seq.map illumination |> Seq.reduce (+)`, is just a map/reduce that sums up the illumination from every light and returns the total.
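The same map/reduce shape translates directly to any language. A Python sketch of the Lambert sum, using a scalar base color for brevity (all names are mine; I've also clamped back-facing light with `max`, which is an addition to the snippet above):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def lambert(point, normal, raw_color, lights):
    # map: per-light contribution; reduce: sum them up
    def illumination(light_pos):
        lm = normalize(tuple(l - p for l, p in zip(light_pos, point)))
        # clamp so lights behind the surface contribute nothing
        return raw_color * max(dot(lm, normal), 0.0)
    return sum(illumination(l) for l in lights)

# a light directly above a point on an upward-facing surface:
print(lambert((0, 0, 0), (0, 1, 0), 1.0, [(0, 10, 0)]))  # 1.0, full brightness
```

A light at the horizon contributes zero, and the brightness falls off with the cosine of the angle in between, which is the whole of Lambert's model.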

## FSharp Raytracer Part 2 - Ray/Sphere Intersections

If our naive plan for raytracing is simply "find the intersection of each object with our ray," we still need to write a function that intersects a ray with a sphere. One very valuable compilation of intersection algorithms is the Real-Time Rendering website (which has an excellent companion book).

A sphere is the set of all points some distance R away from a center point. A ray is the locus of points `origin + t * direction` for all `t > 0`. Thus, we can substitute the ray equation (`point(t) = ray.origin + t * ray.direction`) into the sphere equation (`|point - center| = radius`) to get `|rayOrigin - center + t * rayDirection| = radius`. This produces a quadratic equation in `t` that we can solve with the quadratic formula.

```
// solving for t after expanding the combined ray and sphere equations
A = |rayDirection|^2
B = 2 * (rayDirection . (rayOrigin - center))
C = |(rayOrigin - center)|^2 - radius^2
A * t^2 + B * t + C = 0
```

Full intersection code:

```
// 'shape' is passed in explicitly so we can record which object was hit
let raysphere shape pos r ray =
    let RC = ray.Pos - pos
    let A = ray.Dir.LengthSquared
    let B = dot (2.0f * ray.Dir) RC
    let C = RC.LengthSquared - r * r
    let disc = B * B - 4.0f * A * C
    if disc < 0.0f then None // no intersection
    else
        // we only take the negative root, so we only get the near (front) point
        let t = (-B - sqrt disc) / (2.0f * A)
        if t < 0.0f then None // sphere is behind the camera
        else
            let point = t * ray.Dir + ray.Pos
            let normal = Vector3.Normalize(point - pos)
            Some { time = t; point = point; normal = normal; shape = shape }
```

Now that we have a good way of detecting intersections, we need a good way to decide what color the intersection point should be rendered as. In the next post, I'll talk about one of the most popular and simplest shaders out there: Phong shading. For now, we can write a dummy shader that just picks a pseudo-random color based on the hash of the object:

```
let dummyShade intersection =
    let n = float32 (intersection.shape.GetHashCode())
    Vector3(n / 100.f % 1.f, n / 1000.f % 1.f, n / 10000.f % 1.f)
```

And here is the result, in all its glory:

Up next: Phong shading, so our spheres look a little bit more 3D.

For my inaugural blog series, I think I'll start by talking a bit about ray tracing. 3D Graphics has always been one of my interests, and an easy way to get to understand some of the concepts is by writing your own ray tracer! Plus, the end result is fairly pretty.

- **Vectors** - A vector is a sequence of N numbers that represents a point or direction in space. We'll mostly be using 3D vectors such as (0,0,0), since we're dealing with 3D space. We will have to add vectors, scale vectors, take the length of a vector, and take the cross product of two vectors.
- **Matrices** - Where a vector is a sequence of N numbers, a matrix is an NxN grid of numbers. Matrices are useful in 3D math because they can be used to define *transformations* on vectors. Rotation, translation, scale, and perspective can all be encoded as matrices.
- **Coordinate Systems** - We commonly work in multiple frames of reference. The most familiar is "world coordinates," in which X, Y, and Z represent direction along 3 orthogonal axes. By one convention, X is left/right, Y is up/down, and Z is in/out. Another coordinate system we'll be dealing with is "screen coordinates," in which X and Y go from -1 to 1 as the point goes from the bottom-left corner of the screen to the top-right corner.
- **Object Space** - coordinates relative to a particular object. The origin will usually correspond to the "center" or "pivot point" of that object.
- **World Space** - coordinates that are "fixed". Objects and the camera view will move relative to world space.
- **Camera Space** (Eye Space) - coordinates centered on the current view. This system is usually rotated and translated from world coordinates but not stretched or scaled in any way (they are *isometrically isomorphic*, if you want to be precise).
- **Clip Space** - the first non-linear space we've encountered, meaning a rectangle in clip space is not a rectangle in world space. Clip space exists inside a frustum bounded by the camera bounds on the X and Y axes and the near/far clip planes on the Z axis. All our visible objects lie within this clip-space "rectangle" (which is actually a frustum).
- **Screen Space** - clip space is easily normalized by dividing by W. This results in X/Y ranging from -1 to 1, which we can map to pixel-based window coordinates easily based on the size of our window and the number of pixels.
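That last step, clip space to window pixels, can be sketched in a few lines. A Python illustration of the perspective divide and viewport mapping (function names are mine; this uses the simple bottom-left-origin convention described above):

```python
def clip_to_ndc(v4):
    # perspective divide: (x, y, z, w) -> (x/w, y/w, z/w)
    x, y, z, w = v4
    return (x / w, y / w, z / w)

def ndc_to_window(ndc, width, height):
    # map x and y from [-1, 1] to pixel coordinates
    x, y, _ = ndc
    return ((x + 1.0) * 0.5 * width, (y + 1.0) * 0.5 * height)

# the clip-space origin lands at the center of a 640x480 window
print(ndc_to_window(clip_to_ndc((0.0, 0.0, 1.0, 1.0)), 640, 480))  # (320.0, 240.0)
```

Real graphics APIs additionally flip Y and apply the depth range at this stage, but the divide-then-scale structure is the same.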

Okay, let's get started. I'm going to use OpenTK as a 3D matrix/vector library.

**The data** of our scene can just be defined up front. I'm going to simplify things and place the camera at the world origin. Then we can express all object data in world coordinates as well and ignore the difference between object/world/camera space. This laziness allows us to skip using a model-view transformation. Also, I'll only deal with spheres for now. This simplifies our data definition to:

```
let scene = [
    Sphere(Vector3(0.f, -1000.f, 0.f), 998.f)
    Sphere(Vector3(-1.f, 0.f, 5.f), 1.f)
    Sphere(Vector3(2.f, 0.f, 5.f), 2.f)
]
let Lights = [] // TODO
```

**The main body** of our ray tracer is going to trace one ray for each pixel on the screen. One key assumption we can start off with is that each ray is independent of every other ray. This way, our main loop is trivially simple.

```
for y in 0..h-1 do
    for x in 0..w-1 do
        data.[x,y] <- raytrace (x, y)
```

For each pixel coordinate, we need to transform to screen space, pick a near/far value for Z, then "unproject" a ray into camera/world space based on our camera values. Since we're working under a vastly simplified "clip space = world space" assumption, unproject simply returns the input vector's X/Y/Z coordinates as the ray direction.

```
let windowToScreen (wx, wy) =
    // map pixel coordinates to [-1, 1] on both axes
    (2.f * float32 wx - float32 w) / float32 w,
    (2.f * float32 wy - float32 h) / float32 h
(* given a point in clip space, convert to world space *)
let unproject (v4:Vector4) =
    // our unproject is an identity transform, which is why ray.Dir is v4.Xyz
    { Dir = v4.Xyz; Pos = Vector3.Zero }
let raytrace (x, y) =
    let screenx, screeny = windowToScreen (x, y)
    let ray = unproject (Vector4(screenx, screeny, 1.f, 1.f))
    match closestIntersection scene ray with
    | None -> Color.Black
    | Some(item) -> Color.White
```

Here is our total code so far ... and the output. Haha. It's black! We haven't coded any lighting, ray intersections, or shading yet.

- Part 2: Ray/Sphere Intersections
- Part 3: Simple Phong Shading
- Part 4: Specular Reflection
- Part 5: More Geometry Types
- Part 6: Ambient Occlusion
- Part 7: Environment Mapping
- Part 8: Spherical Harmonics

If you're a programmer, you have likely seen the "FizzBuzz" problem at some point.

> Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

The nice thing about "FizzBuzz" as an interview question is that it is on the lower threshold of triviality, yet it manages to cover I/O, looping, conditionals, and arithmetic. With that said, here is how I'd write out FizzBuzz:

```
static void FizzBuzz() {
    for (int i = 1; i <= 100; i++) {
        Console.WriteLine(
            i % 15 == 0 ? "FizzBuzz" :
            i % 3 == 0 ? "Fizz" :
            i % 5 == 0 ? "Buzz" :
            i.ToString());
    }
}
```

Hmmm. For some reason, even this small bit of code is going to cause discussion. You could easily find a zillion different ways of writing FizzBuzz, the classic almost-trivial program, and everyone will have differing opinions on how to write it. As it stands, I've heard feedback such as "ugh, the ternary operator is obfuscation" and "you need to comment that modulus operation, since your maintainer won't understand it."

Ultimately, **I can't say that these differing opinions are wrong**. Everyone is a Blub programmer, me included, so things we don't use look *goddamn cryptic* and things we've decided not to use look *childlike in their simplicity*. However, it seems to me that you'd be most productive on a team with other programmers of the same Blub-index, or at least on a team where everyone's index falls within everyone else's tolerance level. Perhaps programmers get righteous in defending their favorite programming language or coding style in hopes of converting others to their Blub level?

My most proficient languages are C#, F#, JavaScript, Python, and Java. Perhaps it's just because it's what I write at work every day, but my Blub-level is somewhere between C# and F#. In every language I write, some parts feel really pleasant, like the ease of manipulating the shape of data in LINQ, or refining a DOM selection in jQuery (which Microsoft should totally have paid jresig to call Linq2DOM), or mocking an interface in Java with an anonymous instance. And other parts feel kind of awkward. But how much of that comes from the implicit tradeoffs we as programmers make every day versus the habits I've built up, I'm not really sure.

[Code Bits] [Code Musing] [Language Rants]