Sunday, February 26, 2006

Thunderbird Fcc extension

Finally I found a Thunderbird extension that allows me to select the Fcc folder on the compose/reply panel.
Not only can you control which folder the mail (once sent) should be filed into, but you can also
  • choose not to save it at all (no fcc)
  • on a reply, have the original email moved to the same folder



The latter option simply shows that whoever wrote this extension knows what (s)he is doing. This is just awesome.

Wednesday, February 22, 2006

SOA vs The Architects

With SOA and/or the concept of composite applications we will see an increase in ready-to-use business services as well as technical services.
This is the promise of SOA and a pre-requisite for composite applications.

What makes me wonder is: who will control access to those ready-to-use services? Just the term ready-to-use makes me panic.
And I don't mean just security-wise, but rather on a logical level.

Let's say I'm a developer working on a composite application, or just in need of a service that others have already built and were friendly enough to provide to the whole enterprise.
Let's say - as in almost all examples - a credit check.
Let's say this is some in-house customer-service task I'm automating that from time to time needs to check the credit of a customer or prospect, e.g. 100 times a day.

With the help of the service registry I will be able to discover the credit check service and use it from my application (think UDDI/WSDL if that helps, but those are just protocols and formats to facilitate that).
So I'm happy, I found the service and I'm going to use that service.
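Just to make this a bit more concrete, here is a minimal sketch of what the consuming side could look like once the registry has pointed me at the service's WSDL. Everything in it (the URL, the service name, the CreditCheckPort interface) is made up for illustration, and I'm using a JAX-WS style client only because it keeps the example short:

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hypothetical service endpoint interface; in real life it would be
    // generated from the WSDL that was found in the registry.
    @WebService
    interface CreditCheckPort {
        boolean checkCredit(String customerId);
    }

    public class CreditCheckClient {
        public static void main(String[] args) throws Exception {
            // WSDL location and service name are purely illustrative,
            // i.e. whatever the registry (UDDI) lookup handed back.
            URL wsdl = new URL("http://services.example.com/CreditCheckService?wsdl");
            QName name = new QName("http://services.example.com/", "CreditCheckService");

            Service service = Service.create(wsdl, name);
            CreditCheckPort port = service.getPort(CreditCheckPort.class);

            // One call per credit check; perfectly fine at ~100 calls a day.
            boolean ok = port.checkCredit("customer-4711");
            System.out.println("Credit OK: " + ok);
        }
    }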

There will probably be some security restrictions as to which user is allowed to connect to the credit check. Good.
So I contact the supplier/owner of said service, tell him why and how I need access to it, and I will be granted access.
I finish my development/testing/whatever task and deploy my application to production. Everything runs smoothly, everybody is happy.

Half a year later, I'm working on a different application, and again need the credit check. I again look into the service registry, find the credit check service I'm familiar with (or I just remember it), and start using it for the new application.

This time, however, I'm working on a web application that issues quotes to prospective customers, and I have to include a credit check in the final calculation of the quote. The estimated number of quotes is several thousand a day (because I'm just a bit smaller than Amazon but still huge... ok).

Who will tell me that using the (same) credit check service is still OK? Where can I find out what load the credit check service is able to absorb? And at which load it is already running? There might already be 10 applications using the credit check that together account for 90% of the capacity it was designed for, and now I'm adding another 70%. Who will keep me from doing so?
Who will be able to do impact analysis on all the other 10 applications?

The only person/organisation that comes to mind is either the enterprise architects or some newly established integration architects/specialists.

But those guys need to have a more operational role than in the past. They need to know the general state of their services quite well: for the past, for today, and for the months to come.

Isn't this a major shift in expectations from those groups?
Are they ready? Are the operations departments ready to let others, i.e. architects etc., look into their systems at a level usually reserved for operators & administrators?

Saturday, February 18, 2006

wikiCalc

wikiCalc (by softwaregarden) is a cool AJAX application that combines the "public", or rather collaborative, editing approach of a wiki with the more traditional way of (structured?) data entry in a spreadsheet like this one here.
I've only tried this out myself (i.e. alone), but I have a feeling that this might really help in collaborative calc-like efforts, e.g. when a group of people works on a business case or some other calculation with more than just one input.

Friday, February 17, 2006

AJAX again

Well, since everyone talks about AJAX all the time, I might as well do so, too.
Here's a good resource (hub) for it: ajaxian

Tuesday, February 14, 2006

Loosely coupled monitoring

In the world of SOA, webservices and (generally) loosely coupled systems and applications, monitoring gets a new quality. And I really mean not just "more important" but a new quality.

I still can't say what it will have to look like, but the current (system/server-centric) approach, with its more or less single-system (building-block level) view of performance, availability, capacity, etc., will not be able to cover SOA et al.

I guess it will become more important to measure and know where a certain instance or class of message (or document) is using what kind of resources (or blocking them).
Won't monitoring "probes" become actual attributes or tags of the messages that traverse the various buses and systems, being updated at each (logical) hop and at the same time updating some performance counters on those hops?
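To make that idea a bit more tangible, here is a rough sketch of what such a travelling probe could look like. All the names are invented, it's just the shape of the idea: the message carries its own hop records, and every hop appends to them while also bumping its own local counters.

    import java.util.ArrayList;
    import java.util.List;

    // The message itself carries its monitoring data as attributes/tags.
    class MonitoredMessage {
        final String payload;
        final List<HopRecord> hops = new ArrayList<HopRecord>();
        MonitoredMessage(String payload) { this.payload = payload; }
    }

    // One record per (logical) hop the message has passed through.
    class HopRecord {
        final String hopName;
        final long arrivedAt;
        final long processingMillis;
        HopRecord(String hopName, long arrivedAt, long processingMillis) {
            this.hopName = hopName;
            this.arrivedAt = arrivedAt;
            this.processingMillis = processingMillis;
        }
    }

    // Every bus/system the message traverses would do something like this.
    class Hop {
        private final String name;
        private long totalMessages;   // local performance counter
        private long totalMillis;     // local performance counter

        Hop(String name) { this.name = name; }

        void process(MonitoredMessage msg) {
            long start = System.currentTimeMillis();
            // ... the actual routing / transformation work happens here ...
            long elapsed = System.currentTimeMillis() - start;

            // update the probe data travelling with the message ...
            msg.hops.add(new HopRecord(name, start, elapsed));

            // ... and the hop's own counters at the same time
            totalMessages++;
            totalMillis += elapsed;
        }
    }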

To me this looks really orthogonal to today's approach.

Sunday, February 12, 2006

Tim Bray on Java vs LAMP

First of all, I don't think there is a real "vs." there, but anyway, it's a catchy title.
Look here at what Tim actually has to say on this, I really love it.

Friday, February 10, 2006

Is there a fair price for software?

In a recent discussion a customer (or rather: a prospect) said that a certain price for a given piece of software was too high for him, because he would not "utilize everything the software can do".

Aha.

This again made me think of the metrics we nowadays use for licensing software:
  • # of CPUs (or cores, or sockets, whatever)
  • # of nodes
  • # of instances
  • # of users (and derivatives like # of mailboxes, ...)
  • GB, TB, ...
A couple of years ago Sun started with the Java Enterprise System model which is based on # of employees.
(At this time I should disclose that I do work for Sun Microsystems.)
Yet another model, but this time not a technical one, but a purely commercial one.
One advantage is that in general larger companies tend to pay more (more employees) and smaller companies pay less.
Another advantage is that you separate architecture decisions from commercial decisions.
Need an additional node/CPU for your appserver? Go ahead, install it, use it. Without having to think about license implications.
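A tiny, purely made-up calculation to illustrate that decoupling (none of these numbers are anybody's real price list):

    // All prices and counts here are invented, just to show the effect.
    public class LicenseModels {
        public static void main(String[] args) {
            int employees = 5000;
            double perEmployeePrice = 100;    // hypothetical
            double perCpuPrice = 10000;       // hypothetical

            // Per-CPU model: adding nodes to the appserver changes the bill.
            System.out.println("per CPU,       8 CPUs: " + 8 * perCpuPrice);
            System.out.println("per CPU,      12 CPUs: " + 12 * perCpuPrice);

            // Per-employee model: the bill stays the same, no matter how
            // many nodes or CPUs the architecture ends up needing.
            System.out.println("per employee, any CPUs: " + employees * perEmployeePrice);
        }
    }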

That's what counts for me: get the commercial/administrative part out of the technical/architectural discussions and decisions.
In a way Oracle did the same, although they (usually) charge per CPU (or per user...). Still a better model than charging per table or per tablespace, or, even worse, per transaction... (Guess everyone would turn off autocommit, then...)
Also, if you pay per CPU this is at least some indicator of performance and therefore value... not perfect, but still.

So why are we discussing this at all?

It's not only because of open-source and "free" software.
IMHO "free" software could only come to life because there was a debate about software pricing in the beginning.
Software does not deliver any tangible good - as opposed to servers or PCs or dish-washers.
With software you don't see and cannot touch what you are paying for.
That's why earlier software publishers seemed to define the value of the software by the amount of (printed) documentation they shipped with it.
Remember those huge software boxes that contained like 3-10 floppy disks and 3 tons of documentation? More docs, more money.
Now with software being downloaded from the internet, or delivered in packages only marginally larger than the CD/DVD case, this indicator has vanished. Thank god.

But you still get the famous "software does not cost anything apart from the CD" every now and then.
Or the equally common "I could write this in 5 days, so why are you charging 100kUSD for this?"

There seem to be 2 schools of thought.
  1. cost-based,
    i.e. charge based on the effort to write and support the software
  2. value-based,
    i.e. charge based on the (perceived) value the software brings to the customer/user

Red Hat, Solaris, and all others that charge (directly or indirectly) for support fall into the first category.
Oracle, BEA, and even the Sun Java ES model fall into the second one... since their effort doesn't change with your number of CPUs (or employees), but (in a way) your value does.

Which is better? I don't know...
Which is more fair? Don't know either...
Which will prevail? Beats me, but we have seen a trend towards cost-based pricing in recent years, haven't we?

And we're not even touching the software as a service discussion here...