Friday, November 16, 2012

knock...knock

...who's there?

*yawn*

I know... good ole' knock knock jokes are over knocked... (uhhh... no pun inten... meh!!)

So I'm thinking of blogging again...

...just thinking

What am I doing up so late anyways?

#justthinking

Thursday, December 04, 2008

New Project - URLCron

Following up on my quiet announcements yesterday about Erlami, Erlcfg and Fastiga, I've just pushed another project out to github... UrlCron.

The way I architect our platforms closely mimics the way UNIX programs are advisedly written... i.e. Small Programs Which Do One Thing, Do It Well, And Can Read From Standard In And Write To Standard Out In A Simple Format.

In our case, most of our applications are made up of many RESTful webservices, all talking to each other via HTTP calls and JSON responses. The UIs are heavily client-side JavaScript, and they also talk to the various webservices. Normally, each application will have one authoritative application server, which is also a RESTful webservice, directly responsible for knowing *everything* about that particular application.

For instance, one of our media campaign engines has an appserver which deals with authentication, authorization, media setup, etc for various front ends (HTTP clients, SMS clients, IVR clients, etc).

Following this very loose coupling and disconnected architecture, we tend to evolve lots of small standalone systems. And a lot of the time, we need to set timeouts, callbacks and schedules for activities that should be deferred. This can all quickly degenerate into complexity, incompatibility and lots of duplicate code, and can become very language dependent. Also, the option of using the venerable cron daemon to call URLs from shell scripts is not very nice, and is too much of a hack.

Enter UrlCron. UrlCron is designed to fit into our disconnected mesh of services, allowing any service to schedule a call to any other service and have the result stored. The scheduling service can come back at a later date and time to check the status.
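
To make that concrete, here's a minimal sketch of how one Erlang service might ask a scheduler like this to defer a call, using the standard inets HTTP client. The endpoint paths and parameter names are made up for illustration; see the README for the real interface.

inets:start(),
Body = "url=http://billing.internal/retry&start_time=2008-12-05T09:00:00",
{ok, {{_, 200, _}, _Headers, ScheduleId}} =
    httpc:request(post,
                  {"http://urlcron.internal/schedule", [],
                   "application/x-www-form-urlencoded", Body},
                  [], []),
%% ...and later, come back and ask how it went:
{ok, {{_, 200, _}, _, Status}} =
    httpc:request("http://urlcron.internal/status/" ++ ScheduleId),
io:format("schedule ~s is ~s~n", [ScheduleId, Status]).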

In its current implementation on github, it's just a few days old and not yet mature, but the entire concept works end to end.

The README on github contains a lot of ideas about where I would like to take this, and some notes on architecture and design.

Anyone else want to have fun poking around at this? Go crazyyyy!!!!

Ace out!


Wednesday, December 03, 2008

Sweet November

November was a good month. Yeah, it really was... and yeah, I know... the world is in a general economic recession, but hey... Obama won!!! :)

Anyways... last month saw three of our in-house projects quietly released on github. My github page is at:

http://github.com/essiene

I'll just use this post to talk briefly about the three projects.

ErlAMI

ErlAMI (pronounced Erl AM I) was my first serious Erlang project that was not hot off the tutorial presses :)

I cut my proverbial Erlang teeth on this project. It is an Asterisk AMI protocol library done in Erlang/OTP and trust me, I've learned a huge load just doing this project.

I needed to build an automated dialler based on Asterisk as part of our in-house product stack, and definitely needed to talk to Asterisk. This was a project that I had started in Java a couple of months back, but after taking the red pill, I decided that the Matrix had me, and I just had to do this in Erlang.

At the time I started it, I was still reading Joe Armstrong's "Programming Erlang", so walking through the git
history will feel like a trip through Joe's book.

I started by rolling my own structures and processes everywhere, learning new features of the language and using them, learning eunit and other things and rolling them in.

Eventually, I had built my own poor man's OTP. I had my primitive supervision trees, FSMs, servers, even a poor straw man's event manager and handler framework.

Impressive, but as I finally completed the OTP parts of the book, I realized I had to port to OTP because of the immense gains, and just the raw amount of iron-clad, tested code I would be building on. I did just that, and the result is the current Erlami.

This one will take a while to mature, but I'm continually fine-tuning my ideas and feeding them back in, and there are some funny bugs and corner/edge-case issues here and there which will get cleaned up with more use.

Erlcfg

This is the new kid on the block, and an idea I just had to execute... plus, since last year, I've made it a job of mine to learn the compiler tools for every language that I know and use. In this case, I just had to get my hands on leex and yecc.

The idea is simply my own extrapolation of the Java properties file format.

Normally a properties file looks like:

some.key = value
some.other.key = other value
some.other.other_key = other other value

I like this because it is naturally namespaced, it is easier on the eyes than XML, and my Java SDP platform uses properties files strictly for configuration. The only problem is verbosity, which is caused by the lack of nesting, i.e. the format does not lend itself to refactoring. For instance, examining:

some.key = value
some.other.key = other value
some.other.other_key = other other value
some.other.other.key = other value

It would be more maintainable to rewrite this as:

some = {
    key = "value";

    other = {
        key = "other value";
        other_key = "other other value";

        other = {
            key = "other value";
        };
    };
};

Also, this bears some resemblance to JSON and, in some ways, YAML. Well... erlcfg aims to do just this, and adds something else... VARIABLES! Check out the README for a good, complex config file example.

Currently, the syntax is very strict (notice the extra assignment operators and semicolons... annoying!) to help make parsing easier. But since I have achieved my initial goals, I'll go back and make it easier to write.
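
For the curious, the leex/yecc workflow itself is pretty painless. A rough sketch from the Erlang shell (the .xrl/.yrl file names and generated module names below are just illustrative, not necessarily what erlcfg actually uses):

1> leex:file("erlcfg_lexer.xrl").      %% token rules  -> generates erlcfg_lexer.erl
2> yecc:file("erlcfg_parser.yrl").     %% grammar      -> generates erlcfg_parser.erl
3> {ok, Tokens, _EndLine} = erlcfg_lexer:string("port = 111;").
4> erlcfg_parser:parse(Tokens).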

Also, I have some interesting plans, like adding XML DTD-style support, or simple type annotations, to help with easy verification of config files. This would allow you to define:

server {
    port = integer;
    listen = list(string);

    log {
        data = string;
        level = atom;
    }
}

And then use this to verify:

server = {
    port = 111;
    listen = ("192.167.10.1", "192.168.10.1");

    log = {
        data = "/var/log/applog";
        level = debug;
    };
};

I would then love to be able to verify the file by doing:

$ erlcfg --verify /var/lib/configfile.dtd /etc/config.conf

This is still a nice dream right now, but trust me to keep hacking at this till it is done. Did I also mention that it supports all Erlang terms except the tuple?


Fastiga

This one is a Scala project. It is an implementation of an Asterisk FastAGI container in Scala.

This really came about because I had a very specific way I wanted to build my AGI applications, i.e. as the simple state machines that they are. Basically, almost like you would build a parser from a BNF specification, I wanted to build my AGI applications to match my state diagrams.

So I set out to replace IFs with MATCHes, build case classes that would allow those matches, and employ Scala Actors and Tail Recursion to make it all work nicely. Also, using reflection, we can host multiple AGI applications inside just one container, which is currently deployed inside of another servlet container, allowing more packageable code and promoting better code reuse.

You work on your AGI logic, let us handle the running and hosting framework.

It's still a work in progress, but the main ideas are realized and it is being used heavily in-house.

We actually have a *thin* framework around it (much like OTP is a framework around Erlang), which I'll tidy up and make a part of the project.


Looking Forward

Honestly, more things are coming down the pipe... lots of ideas and tools that we're using in-house, which I believe will be valuable to some other team of harassed developers like us. :)

So this is hoping for a more productive December and onwards.

Ace... out!

Saturday, November 01, 2008

Erlang/Scala... gateway to State Machine Oriented Programming

WARNING: LONG ARTICLE AHEAD, GRAB YOUR FAVOURITE BEVERAGE.

Background

For the past couple of weeks, my team has been working on a version two of one of our apps.

The first version was basically a Python application, that called out to a Java REST webservice. This webservice talked to asterisk via AMI.

There were also some Python AGI scripts for doing automated IVR sessions.

It worked, but there were a lot of ideas left over after the first version was done. So I did what any sanity-loving team lead would do.... branched our git repo and started work on version 2. Now, our version 1.0 stable is available for clients, but we're shooting to quickly start recommending version 2.0, and already, just after 3 weeks of very hard work, we're smiling and approaching the end.

New Beginnings Architecture
The structure of the new application is this:

The Java AMI REST webservice, has been replaced by an Erlang/OTP/Mnesia/Mochiweb webservice. If you have any idea what those terms mean, you can already see that I've gained a lot of features.

My Prioritized FIFOs are now distributed across multiple nodes... at almost zero coding cost (I didn't find a comfortable way of doing this in Java). (We'll be releasing the Erlang AMI library next month as open source software).
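
As a rough illustration of what "almost zero coding cost" means here (queue_worker is a hypothetical module and 'queue@host2' a made-up node name), moving a worker onto another node is pretty much a one-argument change:

%% same code, local node:
Local = spawn(queue_worker, start, []),
%% same code, remote node... only the spawn call changes:
Remote = spawn('queue@host2', queue_worker, start, []),
Remote ! {push, {priority, 1}, <<"job payload">>}.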

The Python AGI scripts have been replaced with Scala AGI application modules, seated in a novel FastAGI server also built in Scala. (This server will also be released as open source).

The rest of the application is a Comet/Orbited/jQuery/Pylons over mod_wsgi mix.

Why Oh Why?!!!
Now, this is the fun part... why?

For the last 1.5+ years I've been working exclusively in the Telecoms integration middleware and SDP space, building and deploying services for very aggressive customers, one of them now the fastest growing Telecoms company in Nigeria (if not Africa). And that experience left me with some major impressions:

  • My approach was still error-prone.
  • The problems are simple but exacting.
  • There is NO ROOM for NON-EXACTNESS... ZERO!!!
I learnt these lessons rather shockingly, since I consider myself a careful artisan, always striving to improve my art, and having done so more than a not-too-small percentage of others :)

Enter Application Patterns
The first thing I noticed was that all the applications follow a particular pattern, and I rightly set about building a framework to easily enable writing of these applications... I'm pretty proud of that framework and it's humming silently in more than 3 differently sized telecoms networks.

The framework has adapters to connect to various network elements, USSD Gateways, SMS Gateways, MSCs, Mediation Servers, etc and convert all incoming messages into a common format (based on HTTP URLS). These requests are routed on the framework router to the Service endpoints... the Handlers.

The handlers are the core of the framework and do all the SCP, AAA, etc. connections and business logic.

After writing a couple of the handlers, more similarities stood out, so I kept fine-tuning that part of the framework until very little of it was left to be customized for just the business logic. The plan is to eventually develop a DSL that can be used to write that part and deploy in real time.

Where Did I Go Wrong?
Along the way, I noticed that some of the applications would still evade my exception handling. Also, once in a while, when traffic would spike, I would get some errors which I *shouldn't* be getting. I would
rise to the occasion and fix them, but after one pretty hairy issue with some lost transactions, I really began to question my entire foundations.

There seemed to be something amiss: I couldn't seem to translate the very simple logic flow into similarly simple Java code without some corner and edge case surprises (and I unit test A LOT!). Something was definitely missing.

I would solve the problem, but decided that I needed to change something very fundamental.

Does This Tunnel Have An End?
That was when I started my gazillionth trek up the Functional programming hill - out of desperation. I had dabbled in Haskell 2 years ago and left off because there were no practical libraries. I'd had a brief fling and excitement with Ocaml around the time I dropped Haskell, and though I really liked Ocaml and was more productive in it, I dropped it too because it didn't have good libraries (these situations are changing as I write... just last month or so, both Ocaml and Haskell announced Batteries Included projects... hooray).

I had also touched Erlang but hurriedly dropped it... I mean... ewwwwwww.... was that even readable syntax? I had played with various dialects of Lisp (Common Lisp, newLISP, a bit of Scheme), but somehow, my inner programmer didn't gel with any of these, at least not yet. Maybe it was just the timing, but I like to think that I just wasn't ready; I hadn't faced my own Waterloo class of problems yet.

The Matrix Has You... errr... Scala
Anyway, this time, as I started exploring other options again, something was different. I was looking out for a language that:
  • Would allow me to write code without carrying too much state information about, so I didn't get shot in the foot when I least expected it
  • Had very broad library coverage
  • If possible, allow me to build on what I'd already done.
  • Could allow me to build my dream distributed persistence framework (in my class of problems, Map/Reduce, hence Hadoop/HBase/CTable, is not the solution... I was looking for a mature DHT or something better with object serialization support (Scalaris, Mnesia!)).
  • Easily allow me to scale out without worrying about concurrency issues, locks, synchronization, etc.
Anyways, that was when I first came across Scala the language. (www.scala-lang.org)

Scala combines very strict typing, functional and object oriented paradigms with very sexy type inference into one badass language that runs on the JVM, (and I hear .NET CLR too).

I couldn't believe it, so I took my time dipping my feet in the pool and sampling.

Learning Strategy... the Stalker/Prey Approach
There is a way that I like to learn when I'm really, really serious. I search the web for all sorts of references and tutorials, blogs, articles and whatnot about my prey (well... technology doesn't sound as cool :) ), and then take my time going through all the trivial examples, code snippets, hanging out on IRC, etc... just to immerse myself in the feel of the technology. When I have a reasonable feel for the technology and community, I then dive in head first via the "recommended text or tutorial".

Long story short, I began to stalk Scala. Funnily, in so doing I became the prey! I know... it's weird, but eventually, Scala and I just "clicked". I don't know what did it, if it was the special sweet spot on the JVM, or the combined OOP and Functional paradigms, or the fact that I could import and just use my Vast Array of company code that had already been written in Java. I don't know, but we just clicked. And I knew I had to see this one through.

My first plan was to investigate how to write a servlet in Scala and see if I could eventually swap out my Java handlers with Scala handlers. I started investigating that productively, but kept happening along another big mine that blew up in my mind...

Erlang: A Second Coming
As I read and played with Scala, the Actors library implementation kept coming up, along with how it was a clone of something that is native in Erlang. Well, I decided that if Scala stole it from Erlang, the original had to be worth finding out about. I made up my mind, and I picked a second victim.

Still syntax-wary from my last attempt at Erlang, I went for the stalking now with questions on my mind.

Then a funny thing happened. This time, I waltzed over the Erlang syntax... and I still don't know why, but that syntax just seemed different, instead of weird. After I kept at it for a couple of days, the veil simply lifted.

So What's The Bling?
Erlang has so many goodies it's hard to imagine that it's open source. It's usually features that rich that make companies like Microsoft keep close tabs on their languages.

Anyway, my easy way of explaining Erlang is this: Erlang feels like a domain specific language for highly available and highly scalable applications with very low error rates and high integration to other environments.

Take some time and read that again. Out of the box, Erlang allows you to very cheaply create concurrent applications that can be distributed on multiple network nodes. It's a very basic concept, not an advanced language feature. (spawn, !)
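
A minimal sketch of just those two primitives in action (a throwaway example, nothing more than what ships with the language):

-module(ping).
-export([start/0]).

%% spawn a process, send it a message with !, and wait for the reply.
start() ->
    Pid = spawn(fun loop/0),
    Pid ! {self(), ping},
    receive
        {Pid, pong} -> io:format("got pong from ~p~n", [Pid])
    end.

loop() ->
    receive
        {From, ping} -> From ! {self(), pong}
    end.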

These processes are encouraged to be built for failure, so you don't try to prevent your processes or nodes from dying (a concept I call Fighting Your Exceptions). Instead you encourage them to die, quickly! Erlang provides process parenting/monitoring as another very basic concept, to allow you to restart dead processes. (spawn_link, link, monitor, supervisor)
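
And a tiny, hand-rolled sketch of the "let it die, then restart it" idea (in a real system you'd just use an OTP supervisor, but the primitives look like this):

-module(keep_alive).
-export([start/1]).

%% The parent traps exits, links to the worker, and restarts it when it dies.
start(WorkerFun) ->
    process_flag(trap_exit, true),
    supervise(WorkerFun).

supervise(WorkerFun) ->
    Pid = spawn_link(WorkerFun),
    receive
        {'EXIT', Pid, Reason} ->
            io:format("worker ~p died (~p), restarting~n", [Pid, Reason]),
            supervise(WorkerFun)
    end.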

Finally, what I'd always loved in C and detested in Java (low-level bit manipulation) is present in a very convenient data type: Binaries.

This means a lot to me. I used to fall back to C/D to write low-level binary protocol clients, use that library to build a proxy daemon which spoke plain text, and then connect to that from Java, Python, etc... now I just do the entire back-ends in Erlang!
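
For instance, here's a small sketch (the packet layout is completely made up) of pulling fields out of a binary protocol frame with nothing but pattern matching:

-module(frame).
-export([parse/1]).

%% Made-up layout: 4-bit version, 4-bit flags, 16-bit length, then payload.
parse(<<Version:4, Flags:4, Length:16, Payload:Length/binary, Rest/binary>>) ->
    {{version, Version}, {flags, Flags}, {payload, Payload}, {rest, Rest}}.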

Programming As State Machine Construction
Finally, I come to the main point.

Has any of these helped me solve my problems with my previous approach? Hell Yeah!!!

  • Immutability in Scala/Erlang helps me avoid a very nasty class of concurrency problems... race conditions in multithreaded Java code.
  • Immutability also encourages localized variable definitions, which encourages what I call "terminal functions". These are functions whose scopes terminate in themselves. They have all they need inside them, without looking for global or class-wide state. Everything is either passed in or initialized there. And debugging them is WAAAAAAYYYYY simpler :)
  • Higher Order Functions, Anonymous Functions and Currying provide some of the most badass refactoring tools known to man (and woman... if she can program :) ). It's just sooooo cool refactoring code in these languages. Even Python has to take a back seat sometimes, and that is saying a LOT because Python is darned flexible.
  • Tail Recursion (probably my favourite feature): allows me to write pure State Machines, some event driven, some acceptor state machines (see the sketch after this list). This has been my biggest boost in the last couple of months. I sit down and come up with 3 types of diagrams:
  1. An application layering diagram that gives me the various service layers and defines the protocols, contracts and messages between each layer.
  2. A State Transition Diagram or a State Table, which describes completely all the states for each layer and the messages and transitions between them.
  3. A flowchart for each state in each State Machine in each application layer. I then sit down and virtually translate the diagrams into code that works without surprises!! I never felt this was possible in such a short time.
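
Here's a bare-bones sketch (a made-up call FSM, not production code) of what I mean by translating a state diagram straight into tail-recursive code: each state is a function, each message a transition.

-module(call_fsm).
-export([start/0]).

start() ->
    spawn(fun idle/0).

idle() ->
    receive
        {dial, Number} ->
            io:format("dialling ~s~n", [Number]),
            ringing(Number);
        hangup ->
            idle()
    end.

ringing(Number) ->
    receive
        answered -> connected(Number);
        hangup   -> idle()
    end.

connected(Number) ->
    receive
        hangup ->
            io:format("call to ~s ended~n", [Number]),
            idle()
    end.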

Conclusion
I use the Erlang/OTP framework, especially gen_server, gen_fsm and gen_event, a lot to simplify my life, and I use Scala Actors intensively. I plan to try out the Scala/OTP library that is being built here: http://github.com/jboner/scala-otp/tree/master

I'm in the business of software, and one of our company mottos and policies is to solve hard problems that plague our clients. We continually come up with innovative solutions and deliver them, and most of them end up being used by lots of concurrent users.

Scala and Erlang have shown in a very short time, that the future will be pretty different from what we currently know today. I think the era of the Swarm is upon us, where most of our applications are going to be used by a myriad of users. The ability to be able to write correct code that is easily debuggable and run in a scalable environment is going to be more and more of a differentiating factor, and hopefully, that can translate into pre-forty year old retirement ;)

In all seriousness, if you've not looked at a Functional programming language for your platform of choice, you do yourself a serious disservice, which can only be fixed by looking into one, SERIOUSLY, as soon as possible.

Ace... out.


Friday, October 17, 2008

Client/Server design and implementation: Discovering the symmetry in implementation

I experienced a paradigm shift in my design approach recently, which I'd like to put into writing, so I can formulate the rough thoughts in my head better.

Background
I've been working on an Erlang AMI library implementation for a couple of weeks now.

I set out initially to write a simple AMI client library that I could use to implement my application. On the way, I had to learn eunit so I could thoroughly test as I built, not to mention quickly build and test the small blocks before integrating them.

After I built a working prototype (with plenty of bugs which I was well aware of), I knew I had to unit test this properly if I wanted to have confidence in the code, not to mention bragging rights for well-tested code.

First a problem
The problem with testing external connections like this is usually bootstrapping. If I have Asterisk installed on my dev machine, writing the tests is not a problem. But if J. Random Hacker grabs my source, and
out of habit types:

make test

They're only going to get pass marks if they have Asterisk configured and, by some telepathic ethos among hackers, have also set up their Asterisk instance with the same username and password as my own, among other things (channels, etc.). Unfortunately, this doesn't happen as much as we'd love it to :).

There is a way around this using Mock frameworks (which I've never used by the way :D). Anyways, I did what any hacker worth his mettle would do, I sucked it up, stashed the client code in its buggy state, and went ahead to implement a simulator from the same code base.

You what?!!!

Yup... I'm building AmiSym, which is an Asterisk AMI Simulator, and it ships with the AMI library (so you get a client and server for the price of just the client). Now onto the cool parts of doing this.


Advantage Number 1 - More experience
You can never over emphasise the value of increased experience when solving a domain specific problem. In my case, I'm basically solving the AMI problem a second time, even though it is from another perspective
(more on that below). This give me more exposure to the problem, than I would have had if I'd done a one-time-coding-pass and written the client library.

In fact, I end up tackling the AMI library problem 3 times. Once when I implemented the prototype. Once when I implemented the simulator. And once more when I reimplement the client with the experience I've gained from implementing the simulator.


Advantage Number 2 - The other side
If I had stuck to writing just the client, I would probably have come up with a well-done client (I mean, I love to consider myself a not-so-shabby programmer), but now that I am also implementing the server side, I get
to see the problem from both sides of the coin. And that has impacted the way I understand the communication and the mechanisms that the client has to implement, because I'm designing both the producer and consumer.

For instance, I created an internal data structure to store the Key: Value pairs that are AMI responses, with a simple hack to deal with things like action: command results, which contain extra data that
don't come in Key: Value pairs.

This worked satisfactorily for as long as I was just a client. But once I started implementing the simulator, I decided I had to use the same data structure on both sides of the connection, and immediately I found out my
data structure wouldn't be symmetric. That is, I couldn't do:

AMI RAW DATA => RESPONSE PARSER => INTERNAL REPRESENTATION

and then


INTERNAL REPRESENTATION => SERIALIZER => AMI RAW DATA

This was a problem, since it would mean either two code bases, or that I should redesign my INTERNAL REPRESENTATION, which is what I did.

So I now have a more robust Data Structure, because I took a trip to the other side.
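
A toy sketch of the symmetry idea (illustrative only; the real erlami internals differ, and this assumes every line is a well-formed Key: Value pair): parse AMI lines into one internal representation, and serialize that same representation back out, so a round trip is lossless.

-module(ami_codec).
-export([parse/1, serialize/1]).

%% "Key: Value\r\nKey2: Value2\r\n" -> [{"Key","Value"}, {"Key2","Value2"}]
parse(Raw) ->
    [split_pair(Line) || Line <- string:tokens(Raw, "\r\n")].

split_pair(Line) ->
    Pos = string:chr(Line, $:),
    {string:strip(string:substr(Line, 1, Pos - 1)),
     string:strip(string:substr(Line, Pos + 1))}.

%% [{"Key","Value"}] -> "Key: Value\r\n"
serialize(Pairs) ->
    lists:flatten([[K, ": ", V, "\r\n"] || {K, V} <- Pairs]).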


Advantage Number 3 - Borrowing From the other side
If these were the only advantages, the code would already be better off, but there is more. My design of the simulator State Machine and internal processes ends up being very sensible. I attribute this to the fact that my first attempt at writing the simulator is, in actuality, my second attempt at tackling an AMI library core.

Now, going back to the client, I'm borrowing a MIRROR image of the simulator process in the client. Simply put, this makes a *lot* of sense. It makes sense for a Client and Server of the same protocol to be mirror images of each other, such that when they are plugged together and a message is inserted into either of them, it will loop through the virtual ring that is formed and "theoretically" come back in the same representation.

This is all kind of abstract, but just think of two halves of a hoop that fit perfectly together... or YinYang... each groove in one is complemented by an extrusion on the other, so they're really just each other, only turned inside out.

I think that approaching any problem space from this perspective produces a much more robust and complete design and coded implementation. Or to state it in cooler terms... realizing the Zen of YinYang in
Client/Server systems yields better code and a better design.


Advantage Number 4 - Modularity
The final advantage I want to bring up is extreme modularity. Usually, one can relate to modularity in its more common form. Take for instance a problem where we write a generic TCP framework that allows one to inherit and extend to either an HTTP or SMTP module.

This I would like to call forward-divergent modularity, which refers to a single code base being made modular to allow the core functionality to be easily diverged down an irreversible path.

This is not the type of modularity I gained. The type of modularity I'm referring to, which I'd like to call cyclic-divergent modularity, would best be described with the help of some funky diagrams. Enjoy :P

The figure above shows what I have termed Forward-Divergent Modularity. Each of UNIQUE CODE A and B has a common core or set of common core modules, which do similar stuff (like open a socket, set up a generic session, store client_id/ip mappings in a HashMap, etc), but then each of them has its unique portions (SMTP protocol versus HTTP protocol).

I call it forward-divergent, because a virtual message travelling through this system, would either start from CORE and move towards A, or move towards B. A system that is structured this way is an Either/Or system. Nothing would make a single message cross camps. You would not be able to "virtually" LOOP the above system at the UNIQUE CODE POINTS without introducing a protocol converter (mostly impractical) which
confirms that even though they have the same base, they're different logical systems.

Now onto cyclic-divergence.


The figure above shows what I term cyclic-divergent. As you can see, this entire system turns out to be a Single virtual system, even though there is some uniqueness in A and B.

In this case, a "virtual" system message can be inserted at any point, and it should theoretically go through various transforms, but by the time it returns to its originating point, it should be back in the same format it originated in.


In Conclusion - Dude Stop Being Abstract
Ok, I'll stop. :)

I don't know about you, but I'm always thinking about code I write, and realisations like these make me a better systems designer and implementor. For instance, now that I know that I'm actually building a single virtual unit, I have been able to identify that common core in my code base.

My work from here on out is to widen the common core as much as possible, so that the unique parts are very small. This will result in a more robust client library and server library, both depending on a well-tested and well-shaken core. This is most decidedly better than what implementing one in the absence of the other would have resulted in.

As an aside, I'm not so sure I would have seen this using a mock framework (I've never used one, so I don't know). This is not an anti-mock framework rant, just an observation anyways. My personal conclusion then is that whenever one is implementing a protocol-ish library, there could be a lot to gain from also building the Other Side of the protocol in the same code base. The important point of note is IN THE SAME CODE BASE.

So if I were to write an HTTP server, I would also write a client in the same code base, and use that in my testing. Too much work? Yeah, maybe... but it's fun... and it's got its perks :)

Peace out.


Sunday, October 12, 2008

Quick link to creating custom Erlang behaviours

One of the things I liked about Python when I learnt it back then, was
how it exposed what it was doing under the hood in a simple way.

For instance, passing self as the first argument of class methods made
lots of sense coming from an Abstract Data Type use case with C
structs; I could immediately see the relationship.

Anyway, some parts of Erlang are like that too... and trust me... it's
a really, really small and concise language (too concise? :))...
anyways, if you've ever tried to do any sort of frameworky thingy in an
OOP language, you've had to use Interfaces or something similar
(hopefully you've done this right, and not left your users to figure
out at [compile | run] time why things are borking sooooo badly).

Anyways... since Erlang has no classes and nothing like
Interfaces, it implements a very nice feature called Behaviours, which
do something similar to Interfaces. At least, that's the
simplest way to think about Erlang Behaviours.

Here's a quick link showing how to create yours... short and straight
to the point: http://www.trapexit.org/Defining_Your_Own_Behaviour
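
For a taste of what that page walks through, here's a minimal, hand-rolled
behaviour (the module and callback names are made up; the two modules would
live in separate files). The behaviour module declares the callbacks it
expects via behaviour_info/1, and the compiler then checks any module that
declares -behaviour(gen_greeter) for them:

-module(gen_greeter).
-export([behaviour_info/1, run/1]).

%% The contract: implementing modules must export greet/1.
behaviour_info(callbacks) -> [{greet, 1}];
behaviour_info(_Other)    -> undefined.

%% The "framework" part: call into whatever module was plugged in.
run(Mod) ->
    io:format("~s~n", [Mod:greet("world")]).

-module(english_greeter).
-behaviour(gen_greeter).
-export([greet/1]).

greet(Name) -> "Hello, " ++ Name ++ "!".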

Asterisk AMI Library in the works

We're building an interesting application in the office, and Asterisk is
a big part of that application.

The component I'm working on communicates with Asterisk via AMI mostly
for originating calls, and I have a version of it in Java working with
some small bugs here and there.

I decided some weeks back to rebuild that component in Erlang, so I'm
currently working on an Asterisk AMI library in Erlang. This is my first
"real" app in Erlang (the ring benchmark exercise in Programming Erlang
doesn't count :) ).

For me, when working on a project, I *really* love to get unit tests up
and running. They are a great way to "break the ice" and really really
really help keep code maintainable and keep regressions in check as
the codebase grows. So last week, I finally downloaded and went through the
docs for Eunit, and started using it, first for the Ring benchmark (I
should post that code somewhere) exercise and now for the AMI library.
It's pretty neat and simple.
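
For anyone who hasn't tried it, a minimal eunit module looks something like
this (a throwaway example, nothing to do with the actual AMI code): include
the eunit header, and every zero-arity function whose name ends in _test
gets run when you call sample_tests:test().

-module(sample_tests).
-include_lib("eunit/include/eunit.hrl").

addition_test() ->
    ?assertEqual(4, 2 + 2).

tokenize_test() ->
    ?assertEqual(["Action", " Login"], string:tokens("Action: Login", ":")).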

Since I'm still new to Erlang, I'm learning as I go and applying it to this
library. The library will eventually be open sourced and then I'll be
found out for what I really am... a cheap fraud who can't code to save
his left pinkie :)

Anyways... just finished some unit testing and all works, so I'm going
to bed in high spirits :)

--Zzzzzzz

Saturday, October 11, 2008

Back Online

Well, I'm back to blogging and the whole online presence thing.

But there was a reason I deliberately stopped blogging. I realized that
the internet is a very interesting medium of communication. People that
don't even know you will get to read the things that you write.

This is all cool, but the problem is that no matter how shallow or
uncooked your theories (or reasonings) are, some people who are just too
lazy to fathom things out by themselves, will swallow what you have to
say, hook, line, sinker and dare I say boat with the captain and crew!!!
I decided then to stop blogging until I was sure that I had something
fairly well cooked to contribute to the community at large.

So, why am I back? Am I saying that now I'm a rock star programmer or
something akin to that? Well... one small part of me wishes that were
the bare truth, but I really can't judge my "rock-starNESS" (or
awesomeness to boot!). I can, however, honestly write about a wider breadth of
interesting topics that I have had a lot more close encounters with
than before...

Just to give a brief preview, I have spent the past 1 year+ working on
an SDP (Service Deployment Platform) that is now in production
deployment in a number of telcos here. That has been probably the most
satisfying experience I've gained, and has in turn led me onto my
current path which I toe as a programmer. On this path, I've picked up
and continue to pick up things like Parsers (JavaCC, Apaged, Lex/Yacc),
Distributed, Redundant and Reliable Systems and their construction
(DHTs, Heartbeat, Mnesia, MySQL replication), Functional Programming
Languages (Ocaml, Scala, Erlang), and OTP (did I mention Erlang already?!).

I have learnt sooooo much technology in a short while and waaaaaay much
more about crafting reliable systems and managing code complexity to
produce easily evolving code bases where you don't have to pray to God
each time you do an upgrade.

From here on out, I'll be blogging a bit more often, not sprouty and
all authoritative, but just staying on the stuff that I've learnt and
now know. Hopefully, these will make for some fun reading and mayhap...
be helpful.