Friday, October 17, 2008

Client/Server design and implementation: Discovering the symmetry in implementation

I experienced a paradigm shift in my design approach recently, which I'd like to put into writing, so I can formulate the rough thoughts in my head better.

I've been working on an Erlang AMI library implementation for a couple of weeks now.

I set out initially to write a simple AMI client library that I could use to implement my application. Along the way, I had to learn eunit so I could test thoroughly as I built, not to mention quickly build and test the small blocks before integrating them.

After I built a working prototype (with plenty of bugs, which I was well aware of), I knew I had to unit test it properly if I wanted to have confidence in the code, not to mention bragging rights for well-tested code.

First, a problem
The problem with testing external connections like this is usually bootstrapping. If I have Asterisk installed on my dev machine, writing the tests is not a problem. But if J. Random Hacker grabs my source and, out of habit, types:

make test

they're only going to get pass marks if they have Asterisk configured and, by some telepathic ethos among hackers, have also set up their Asterisk instance with the same username and password as mine, among other things (channels, etc.). Unfortunately, this doesn't happen as much as we'd love it to :).

There is a way around this using mock frameworks (which I've never used, by the way :D). Anyways, I did what any hacker worth his salt would do: I sucked it up, stashed the client code in its buggy state, and went ahead to implement a simulator from the same code base.

You what?!!!

Yup... I'm building AmiSym, an Asterisk AMI simulator, and it ships with the AMI library (so you get a client and a server for the price of just the client). Now onto the cool parts of doing this.

Advantage Number 1 - More experience
You can never overemphasise the value of increased experience when solving a domain-specific problem. In my case, I'm basically solving the AMI problem a second time, even though it is from another perspective (more on that below). This gives me more exposure to the problem than I would have had if I'd done a one-time coding pass and written only the client library.

In fact, I end up tackling the AMI library problem three times: once when I implemented the prototype, once when I implemented the simulator, and once when I reimplement the client with the experience I've gained from implementing the simulator.

Advantage Number 2 - The other side
If I had stuck to writing just the client, I would probably have come up with a well-done client (I mean, I love to consider myself a not-so-shabby programmer), but now that I am also implementing the server side, I get to see the problem from both sides of the coin. And that has changed the way I understand the communication and the mechanisms the client has to implement, because I'm designing both the producer and the consumer.

For instance, I created an internal data structure to store the Key: Value pairs that make up AMI responses, with a simple hack to deal with things like Action: Command results, which contain extra data that doesn't come in Key: Value pairs.
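To make that concrete, here's a rough sketch of the kind of parse step I mean (the module and function names are invented for illustration, not the library's actual API): AMI response lines get split into Key/Value pairs, with a catch-all bucket for free-form lines such as command output.

```erlang
%% Hypothetical sketch, not the real library API: parse AMI response
%% lines into a {Pairs, Extra} tuple, where Pairs is a proplist of
%% {Key, Value} strings and Extra collects lines that don't match the
%% "Key: Value" shape (e.g. the output of an "Action: Command").
-module(ami_msg).
-export([parse/1]).

parse(Lines) ->
    lists:foldr(fun parse_line/2, {[], []}, Lines).

parse_line(Line, {Pairs, Extra}) ->
    case string:chr(Line, $:) of
        0 ->
            %% No colon: free-form data, stash it separately.
            {Pairs, [Line | Extra]};
        Pos ->
            Key = string:strip(string:substr(Line, 1, Pos - 1)),
            Val = string:strip(string:substr(Line, Pos + 1)),
            {[{Key, Val} | Pairs], Extra}
    end.
```

Folding from the right keeps both the pairs and the extra lines in their original wire order.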

This worked satisfactorily for as long as I was just a client. But once I started implementing the simulator, I decided I had to use the same data structure on both sides of the connection, and immediately I found out my data structure wouldn't be symmetric. That is, I couldn't parse a message into my internal form and serialize it back out to the same wire format, and vice versa.

This was a problem, since it would mean either two code bases, or that I should redesign my INTERNAL REPRESENTATION, which is what I did.

So I now have a more robust Data Structure, because I took a trip to the other side.
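To illustrate the symmetry I was after (again with invented names, not the real API): the internal representation should survive a round trip through the wire format in either direction.

```erlang
%% Hypothetical sketch of a symmetric codec shared by client and
%% simulator. Internal form: a list of {Key, Value} string pairs.
%% Assumes Key: Value lines only, for brevity.
-module(ami_codec).
-export([to_wire/1, from_wire/1]).

%% Internal form -> wire format ("Key: Value\r\n" lines, blank
%% line terminated, as in the AMI protocol).
to_wire(Pairs) ->
    lists:flatten([[K, ": ", V, "\r\n"] || {K, V} <- Pairs]) ++ "\r\n".

%% Wire format -> internal form.
from_wire(Wire) ->
    [split_pair(L) || L <- string:tokens(Wire, "\r\n")].

split_pair(Line) ->
    Pos = string:chr(Line, $:),
    {string:substr(Line, 1, Pos - 1),
     string:strip(string:substr(Line, Pos + 1))}.
```

With a single codec like this on both sides of the connection, from_wire(to_wire(Pairs)) hands back Pairs unchanged, which is exactly the symmetry my old representation lacked.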

Advantage Number 3 - Borrowing From the other side
If these were the only advantages, the code would already be better off, but there is more. My design of the simulator state machine and internal processes ended up being very sensible. I attribute this to the fact that my first attempt at writing the simulator is, in actuality, my second attempt at tackling an AMI library core.

Now, going back to the client, I'm borrowing a MIRROR image of the simulator process for the client. Simply put, this makes a *lot* of sense. It makes sense for a client and server of the same protocol to be mirror images of each other, such that when they're plugged together and a message is inserted into either of them, it will loop through the virtual ring that is formed and "theoretically" come back in the same representation.

This is all kind of abstract, but just think of two halves of a hoop that fit perfectly together... or YinYang... each groove in one is complemented by an extrusion on the other, so they're really just each other, only turned inside out.

I think that approaching any problem space from this perspective produces a much more robust and complete design and implementation. Or to state it in cooler terms... realizing the Zen of YinYang in Client/Server systems yields better code and a better design.

Advantage Number 4 - Modularity
The final advantage I want to bring up is extreme modularity. Usually, one can relate to modularity in its more common form. Take, for instance, a generic TCP framework that one can inherit from and extend into either an HTTP or an SMTP module.

This I would like to call forward-divergent modularity: a single code base made modular so that the core functionality can easily be diverged down an irreversible path.

This is not the type of modularity I gained. The type I'm referring to, which I'd like to call cyclic-divergent modularity, is best described with the help of some funky diagrams. Enjoy :P

[Figure: Forward-Divergent Modularity — UNIQUE CODE A and UNIQUE CODE B branching off a shared CORE]

The figure above shows what I have termed Forward-Divergent Modularity. UNIQUE CODE A and B share a common core (or set of common core modules) which does similar stuff (like opening a socket, setting up a generic session, storing client_id/IP mappings in a HashMap, etc.), but each of them has its own unique portion (the SMTP protocol versus the HTTP protocol).

I call it forward-divergent because a virtual message travelling through this system would start from CORE and move either towards A or towards B. A system structured this way is an Either/Or system: nothing would make a single message cross camps. You would not be able to "virtually" LOOP the above system at the UNIQUE CODE points without introducing a protocol converter (mostly impractical), which confirms that even though they have the same base, they're different logical systems.
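A toy sketch of what I mean (the protocol handlers here are invented purely for illustration): the core does the shared work, then hands off, irreversibly, to one unique branch or the other.

```erlang
%% Hypothetical sketch of forward-divergent modularity: a shared core
%% that diverges into one of two unique protocol handlers. A message
%% entering the core goes down one branch or the other, never across.
-module(proto_core).
-export([handle/2, toy_http/1, toy_smtp/1]).

%% The common core work (framing, session setup, etc. would live
%% here), followed by the irreversible divergence into Handler.
handle(Handler, Line) ->
    Trimmed = string:strip(Line),
    Handler(Trimmed).

%% Two unique endpoints sharing that core:
toy_http("GET " ++ _)  -> ok_http;
toy_http(_)            -> bad_request.

toy_smtp("HELO " ++ _) -> ok_smtp;
toy_smtp(_)            -> unknown_command.
```

Feed an SMTP line down the HTTP branch and you get a bad_request, not an SMTP reply: the branches never meet again after the core.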

Now onto cyclic-divergence.

[Figure: Cyclic-Divergent Modularity — UNIQUE CODE A and UNIQUE CODE B joined through the shared CORE into one loop]

The figure above shows what I term Cyclic-Divergent Modularity. As you can see, this entire system turns out to be a single virtual system, even though there is some uniqueness in A and B.

In this case, a "virtual" system message can be inserted at any point, and it should, theoretically, go through various transforms; but by the time it has returned to its originating point, it should be back in the same format it originated in.

In Conclusion - Dude Stop Being Abstract
Ok, I'll stop. :)

I don't know about you, but I'm always thinking about the code I write, and realisations like these make me a better systems designer and implementor. For instance, now that I know that I'm actually building a single virtual unit, I have been able to identify that common core in my code base.

My work from here on out is to widen the common core as much as possible, so that the unique parts are very small. This will result in a more robust client library and server library that both depend on a well-tested and well-shaken core. This is most decidedly better than what implementing one in the absence of the other would have produced.

As an aside, I'm not so sure I would have seen this using a mock framework (I've never used one, so I don't know). This is not an anti-mock-framework rant, just an observation anyways. My personal conclusion, then, is that whenever one is implementing a protocol-ish library, there could be a lot to gain from also building the Other Side of the protocol in the same code base. The important point of note is IN THE SAME CODE BASE.

So if I were to write an HTTP server, I would also write a client in the same code base and use that in my testing. Too much work? Yeah, maybe... but it's fun... and it's got its perks :)

Peace out.


Sunday, October 12, 2008

Quick link to creating custom Erlang behaviours

One of the things I liked about Python when I learnt it back then was how it exposed what it was doing under the hood in a simple way.

For instance, passing self as the first argument of class methods made lots of sense coming from an Abstract Data Type use case in C; I could immediately see the relationship.

Anyway, some parts of Erlang are like that too... and trust me... it's a really, really small and concise language (too concise? :)). Anyways... if you've ever tried to do any sort of Frameworky thingy in an OOP language, you've had to use Interfaces or something similar (hopefully you've done this right, and not left your users to figure out at [compile | run] time why things are borking sooooo badly).

Anyways... since Erlang has no classes and nothing like Interfaces, it implements a very nice feature called Behaviours, which does something similar to Interfaces. At least, that's the simplest way to think about Erlang Behaviours.

Here's a quick link showing how to create yours... short and straight to the point:
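To give a taste of what such a link covers, here is a minimal sketch of a hand-rolled behaviour (module names are mine, invented for illustration): the behaviour module exports behaviour_info/1, listing the callbacks a conforming module must implement, plus whatever generic code drives those callbacks.

```erlang
%% Hypothetical behaviour module (gen_greeter.erl). The compiler uses
%% behaviour_info(callbacks) to warn when a module that declares
%% -behaviour(gen_greeter) is missing a required callback.
-module(gen_greeter).
-export([behaviour_info/1, greet/2]).

behaviour_info(callbacks) -> [{greeting, 1}];
behaviour_info(_Other)    -> undefined.

%% Generic code that relies on the callback:
greet(Mod, Name) -> Mod:greeting(Name).
```

```erlang
%% A conforming module (polite.erl, in its own file):
-module(polite).
-behaviour(gen_greeter).
-export([greeting/1]).

greeting(Name) -> "Hello, " ++ Name ++ "!".
```

Calling gen_greeter:greet(polite, "Ada") then dispatches through the behaviour into the callback, just like gen_server does with your handle_call/3.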

Asterisk AMI Library in the works

We're building an interesting application in the office, and Asterisk is
a big part of that application.

The component I'm working on communicates with Asterisk via AMI mostly
for originating calls, and I have a version of it in Java working with
some small bugs here and there.

I decided some weeks back to rebuild that component in Erlang, so I'm
currently working on an Asterisk AMI library in Erlang. This is my first
"real" app in erlang (The ring benchmark exercise in Programming Erlang
doesn't count :) ).

For me, when working on a project, I *really* love to get unit tests up and running. They are a great way to "break the ice" and really, really, really help keep code maintainable and keep regressions in check as the codebase grows. So last week, I finally downloaded and went through the docs for eunit and started using it, first for the Ring benchmark exercise (I should post that code somewhere) and now for the AMI library. It's pretty neat and simple.
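For anyone who hasn't seen eunit, this is roughly what a test module looks like (the module below is a throwaway example of mine; it assumes eunit is on your code path). Functions ending in _test are picked up automatically, and the header gives you an exported test/0 to run them all.

```erlang
%% Throwaway example of eunit usage: any zero-arity function whose
%% name ends in _test becomes a test case.
-module(demo_tests).
-include_lib("eunit/include/eunit.hrl").

reverse_test() ->
    ?assertEqual([3, 2, 1], lists:reverse([1, 2, 3])).

tokens_test() ->
    ?assertEqual(["a", "b"], string:tokens("a b", " ")).
```

Running demo_tests:test() in the shell executes both cases and reports any failures.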

Since I'm still new to Erlang, I'm learning and applying as I go on this library. The library will eventually be open sourced, and then I'll be found out for what I really am... a cheap fraud who can't code to save his left pinkie :)

Anyways... just finished some unit testing and all works, so I'm going
to bed in high spirits :)


Saturday, October 11, 2008

Back Online

Well, I'm back to blogging and the whole online presence thing.

But there was a reason I deliberately stopped blogging. I realized that
the internet is a very interesting medium of communication. People that
don't even know you will get to read the things that you write.

This is all cool, but the problem is that no matter how shallow or uncooked your theories (or reasoning) are, some people who are just too lazy to figure things out for themselves will swallow what you have to say hook, line, sinker, and dare I say boat, with the captain and crew!!! I decided then to stop blogging until I was sure that I had something fairly well cooked to contribute to the community at large.

So, why am I back? Am I saying that now I'm a rock star programmer or something akin to that? Well... one small part of me wishes that were the bare truth, but I really can't judge my "rock-starNESS" (or awesomeness to boot!). I can, however, honestly write about a wider breadth of interesting topics that I have had a lot more close encounters with than before...

Just to give a brief preview, I have spent the past year and more working on an SDP (Service Deployment Platform) that is now in production deployment at a number of telcos here. That has probably been the most satisfying experience I've gained, and it has in turn led me onto the path I now tread as a programmer. On this path, I've picked up and continue to pick up things like parsers (JavaCC, Apaged, Lex/Yacc); distributed, redundant and reliable systems and their construction (DHTs, Heartbeat, Mnesia, MySQL replication); functional programming languages (OCaml, Scala, Erlang); and OTP (did I mention Erlang already?!).

I have learnt sooooo much technology in a short while, and waaaaaay more about crafting reliable systems and managing code complexity to produce easily evolving code bases that you don't have to pray to God over each time you do an upgrade.

From here on out, I'll be blogging a bit more often, not spouting off all authoritative, but just sticking to the stuff that I've learnt and now know. Hopefully, these will make for some fun reading and mayhap...