27.12.05

The deadline

This text was motivated by a current project with late working hours. In this project, there was a deadline by which the go-live had to happen. Of course, this deadline was set by higher levels of management with no insight into the work done by the divisions below them (development, operations).
Most of you should know this scenario. Many of you, I guess, have read the excellent book The Deadline: A Novel About Project Management by Tom DeMarco.

What was the result of that impossible deadline (of course, it later turned out that it was impossible to be ready by that end date, fortunately early enough to move the deadline back)?
Exactly the sort of things described in the remarkable blog entry IT Survivors - Staying Alive In A Software Job by Harshad Oak, which I much appreciate:
  • 6 workdays/week
  • 12+ hours a day (some freaks felt the need to work 15 hours...)
  • Highly dynamic decisions (a few hours before Friday's closing time, the project leader requested that everyone work on Saturday)
  • A 15-hours-a-day colleague expected the others to work as long as he did
  • Some colleagues being on the edge
  • Irritated glances at colleagues not being available for a meeting at 20:00
  • Meetings announced at short notice (a few minutes beforehand)
  • No plan at all...

3.12.05

JGAP 2.5 released

Yesterday, the new version of JGAP was released. JGAP is a genetic algorithms package that is easy to use and ships with ready-to-use components such as genetic operators, selectors and examples.

Try it out now!

There are many references available. Check out the JGAP references page.

14.11.05

Console Outputs in Unit Tests

When reviewing unit tests in different projects, I usually find tests containing output statements (such as System.out.println or file output). My initial position on output in tests, especially console output, is that it should be avoided in any case.
Thinking further, it seems to me that gathering data in a file could be legitimate in some situations. But console output should still be avoided in general. The assertXXX methods and the fail method allow displaying any message the developer wants, so output to other channels (console, file etc.) needs a different motivation.

What could be the motivation for a console output? Maybe to display informal messages to the developer, such as warnings or pure information. For the latter, the motivation does not seem strong enough to me to justify console output. For the former, I could imagine that certain circumstances might justify it, although I cannot name one concretely right now. However, instead of emitting warnings, a unit test should fail or not fail, nothing in between, IMO. A unit test should be fine-grained enough to fulfill this postulate quite easily.
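To illustrate the point about assertion messages: the information a developer would print to the console can instead go into the assertion message, where it only surfaces on failure. A minimal sketch, with a homemade assertEquals standing in for JUnit's and a hypothetical cacheSize value:

```java
// Sketch: put diagnostic information into the assertion message
// (JUnit's assertEquals(String, ...) style) instead of printing it.
// The domain detail (a cache size of 3) is made up for illustration.
public class MessageInsteadOfOutput {
    // minimal stand-in for JUnit's assertEquals(String, int, int)
    static void assertEquals(String message, int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(message + " (expected " + expected
                + ", was " + actual + ")");
        }
    }

    public static void main(String[] args) {
        int cacheSize = 3; // value computed by the code under test (hypothetical)
        // Bad:    System.out.println("cache size is " + cacheSize);
        // Better: the message is only shown when the test actually fails.
        assertEquals("cache should hold exactly the three inserted entries",
                     3, cacheSize);
        System.out.println("ok");
    }
}
```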

For file output or related channels (such as sending an email to an admin, although I have luckily never seen this in a unit test), one could think of keeping track of statistical data to recognize a "tendency" in behaviour, e.g. for logic depending on pseudo-randomness such as hash codes.

To conclude, I still think that in most cases using console or file output causes more harm than good.

Do you have an example where such output would be justifiable in a unit test?

19.8.05

Online survey for JCP program

This is your opportunity to give your opinion about the JCP program, including the Java Specification Requests (JSRs). But plan to take some time, as the questionnaire will occupy you for several minutes.

To the JCP online survey.

Update: the JCP online survey is closed now.

17.8.05

JSR 220 compared to JSR 250

Recently, I wrote about JSR 250, which IMHO is of poor quality. Apologies for writing about it again, but when I saw the draft of JSR 220 (Enterprise JavaBeans 3.0), I felt compelled to do so.

Just have a look at the public review available. It is structured so clearly, wonderful! The goals are stated in a clear, extensive way at the beginning of the document. This could be expected of any JSR. Then read JSR 250. Be sure to also check the comments given in the public review ballots of JSR 220 and JSR 250. Notice the slight difference. For JSR 220, JBoss congratulated "the JSR-220 EG for delivering such a quality specification." This is the first time I have read such positive comments on a JSR (maybe I have reviewed too few of them, who knows).

Several thoughts came to my mind after the comparison:
  • The importance of a JSR probably influences its quality.
  • The person leading the specification has a great influence on the later shape and quality of the JSR.
  • As some persons participated in both JSRs, it's unlikely that these individuals influenced the work significantly.

29.7.05

JSR 250 (Common Annotations) approved: Why?

Just these days, JSR 250 has been approved in the public review ballot. Only IBM voted with "no".

I cannot understand the process entirely. Here are my reasons (also see my former entries JSR 250 (Common Annotations): Opinion and Is JSR 250 already mature?).

Some weeks ago, on 2005-06-25 to be exact, I sent several pages of suggestions to jsr-250-comments@jcp.org (the address given for this purpose). No answer, nothing. On 2005-07-21, I asked the spec lead, Rajiv Mordani, at rajiv.mordani@sun.com for an explanation of why there was no answer to a well-meant, reasonable suggestion and contribution to JSR 250. Again, no answer, nothing. This is very frustrating for someone who spent several hours thinking and writing in order to enhance JSR 250 in its current form. I regard JSR 250 as weak, premature and generally not acceptable. The comment by IBM gives at least *some* hint of the validity of my opinion.
Also, the concrete suggestions made by individuals in a discussion about the first draft were not acknowledged and/or not considered.

I eagerly await the effect of this JSR on the community. Will the community accept and use the common annotations? I don't think so. But let's wait...

A frustrated blogger and neutralized JSR participant...

15.7.05

Physics and Computer Science

As I discussed in a former entry, Why Quantum Theory is important for Object Orientation, I see a connection, a conceptual link between physics and computer science.

That I am not alone with this opinion, although it might sound strange at first, can be seen from Günther Meinhold's article Einstein 2005: Modell und Wirklichkeit (model and reality), unfortunately written in a foreign language for all non-German speakers ;-)

The author draws the following comparisons, which I might be allowed to translate to show the main idea behind the article:

Physics | Computer Science
Models of real objects | Models of the software product to develop
Models demonstrate how Nature works | Models show how the future software product works
Physical model | Logical object model
Mathematical model | Design object model
Mathematical calculation | Creation/development of source code
Mathematical results for measurable physical parameters | Executable source code with testable functionality
Experiments and observations as test basis for the models and their results | Tests of the software's functionality against the product requirements


Yeahhh! I am not the only one out there who believes that physics influences computer science.

Anyone out there who doesn't love Einstein? I can't think so (he is not the father of the atom bomb, by the way, so please don't argue that way).

10.7.05

Is JSR 250 already mature?

As discussed earlier in a more general manner, I have the feeling that JSR 250 is far from a publishable form. Here are some of my concrete suggestions on how to improve the current paper released on June 21st. I already sent them to the JSR commission as an RTF document of several pages:
  1. Define what JSR 250 really is about. There is no common consensus about which aspects should be covered by the common annotations to be defined in JSR 250.
  2. I suggest including only annotations that have a generic character and support concerns of common interest that are not too complex (if they were, they could be part of a separate JSR).
  3. Group annotations: It should be quite obvious how to group the annotations currently defined in the public review paper. This would be a first and easy step towards getting a better feeling for where the journey goes.
  4. Context-free aspects that could be covered by annotations could be:
  • Design Patterns, especially the topics
    • name of the pattern
    • role within a design pattern
    • description of the pattern itself and of its roles
  • Design by Contract-related issues
  • Architectural layers, such as (just to give an idea)
    • persistence
    • persistence mapping
    • interfaces to third-party systems
    • business logic
    • domain logic
    • transportation / protocols
    • view mapping
    • view logic
    • display

The current version of the JSR 250 public review does not seem adequate as a basis for consolidating or extending it, as becomes apparent when reading it. As I suggested, the whole paper needs a general refurbishment. We are far from the point of discussing whether a single annotation proposed by the JSR's expert group is useful, should contain additional information or should be renamed.

The problem is of a more principal nature: the goal should be defined in round terms, a firm stand should be taken, context-free (or, say, generic) annotations should be considered instead of special ones, and the direct support for J2EE specifics should be reconsidered.

Interesting resources:

7.7.05

JSR 250 (Common Annotations): Opinion

The public review of JSR 250 (Common Annotations for the Java Platform) was published on June 21st. After looking through the list of annotations provided, I felt somewhat puzzled. IMHO, there is no clear line visible, no clear concept recognizable on which the selection of the annotation proposals is based.
I will write another entry soon explaining in more detail what I have written to the JSR 250 expert group as my proposal on how the annotations of this JSR should be chosen.

From my point of view, this JSR should mostly (exclusively?) contain annotations that are of common interest and are context free. One could joke about whether annotations specifically designed for J2EE could ever be of common interest. At least they are not context free.

Context-free means in this context (intentional word repetition :-) ) that the thing annotated is context-free. Take Design Patterns as an example: they provide a context-free concept of how to improve the architecture of a software system, allowing for easier maintenance and better documentation.

Naming conventions for variables

After several years of experience in software development, I have found a way of naming variables (attributes, fields, field declarations, as you like) that I feel comfortable with.

First of all, in any class representing a business object (i.e., one that contains a certain amount of business logic), there should be only private variables. Protected variables don't allow for implementing the Observer pattern! Use getters and setters instead. In a few classes it may seem suitable to use public fields, because those classes are only used as very simple data containers.

My naming conventions for variables have evolved to the following:
  • At class level - for private and protected fields (if any) - I use the prefix m_
  • At method level there are two cases:
    a) method signature: I use prefix a_
    b) method body: No prefix, just the variable name.

Using this kit of simple concepts, it is very easy to determine the scope of a variable just by glancing at its name. In a setter method, the naming convention allows you to quickly find assignment errors and helps avoid "this.", which in most cases (that I have seen) is regarded as ugly.

After a possible prefix, the name of the variable starts with a lowercase letter. If multiple words are concatenated, each word from the second on starts with an uppercase letter, such as: private int m_thisIsMyVariable;

Variables that are constants, having the final static modifiers, are written in all uppercase letters. By the way, they are always public (at least if they are declared at class level).
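The conventions above can be sketched in a small class. The class and field names (Account, m_balance) are made up for illustration:

```java
// Sketch of the naming conventions described above:
// m_ for fields, a_ for method parameters, plain names in method bodies,
// constants in all uppercase.
public class Account {
    public static final int MAX_RETRIES = 3; // constant: all uppercase, public
    private int m_balance;                   // class level: m_ prefix, private

    public void setBalance(final int a_balance) { // signature: a_ prefix
        // no "this." needed - the prefixes keep the names apart
        m_balance = a_balance;
    }

    public int getBalance() {
        int result = m_balance; // method body: no prefix
        return result;
    }

    public static void main(String[] args) {
        Account acc = new Account();
        acc.setBalance(42);
        System.out.println(acc.getBalance());
    }
}
```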

6.6.05

Intentionally failing JUnit tests

In the JUnit mailing list, I followed a discussion about how to implement tests that intentionally expect an exception, i.e. the occurrence of an exception is seen as correct and its absence as a failed test.
Several proposals were made. Let me partially cite the initial comment on the subject:

--------------- snip
Someone in our team writes his tests like this:

try
{
    articleProvider.findArticleById("foobar");
}
catch (NoObjectFoundException e)
{
    assertTrue(true);
}

He says that he does it because otherwise Checkstyle would complain about the empty catch block.
I do it in this manner:

try
{
    articleProvider.findArticleById("foobar");
}
catch (NoObjectFoundException e)
{
    // It's okay since we expect it.
}

--------------- snap

I find the "assertTrue(true)" in the catch statement very funny (who codes like this?). It is of course legitimate and correct to take care of Checkstyle, but in this way?

Most (serious) developers in the JUnit mailing list proposed the latter way (putting a comment like //ignore in the catch block), but then you will again get a Checkstyle violation.

What about my solution:

try {
    articleProvider.findArticleById("foobar");
}
catch (NoObjectFoundException e) {
    ;// this is OK
}

The only simple thing I do is add a semicolon to the catch block.
The semicolon itself is an empty statement, there you go...
See JGAP for a sample application.
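A runnable sketch of the whole pattern, with made-up stand-ins for ArticleProvider and NoObjectFoundException and a homemade fail() instead of JUnit's. Note the fail() call after the method under test: it guards against the case where no exception is thrown at all, which the snippets above silently let pass:

```java
// Expected-exception test pattern, sketched as plain Java.
// ArticleProvider and NoObjectFoundException are hypothetical stand-ins.
public class ExpectedExceptionTest {
    static class NoObjectFoundException extends Exception {}

    static class ArticleProvider {
        Object findArticleById(String id) throws NoObjectFoundException {
            throw new NoObjectFoundException(); // unknown id
        }
    }

    // minimal stand-in for JUnit's fail()
    static void fail(String message) {
        throw new AssertionError(message);
    }

    public static void main(String[] args) {
        ArticleProvider articleProvider = new ArticleProvider();
        try {
            articleProvider.findArticleById("foobar");
            fail("expected NoObjectFoundException"); // only reached if no exception
        }
        catch (NoObjectFoundException e) {
            ;// this is OK - the exception is exactly what we expect
        }
        System.out.println("test passed");
    }
}
```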

22.2.05

JDO: My opinion

Due to the, say, failure of JDO (see the public review ballot) some developers (about a thousand) "signed" the Petition to the Java Community Process Executive Committee.

My opinion about JDO, in a few short statements:
JDO is transparent in a different way than, for example, java.io.Serializable. JDO manipulates class files, not the source code. I don't like this, as you could run into difficulties when doing Test-Driven Development (TDD), and I like to see what is there, namely in the source code. (There was once a post on Joel on Software, The Law of Leaky Abstractions, about the problems with abstractions.)
JDO seems to be a good solution, but not the best I could dream of. Just one remark on the attitude of SAP, a really big software development company, towards JDO: in later releases of the NetWeaver Developer Studio for Java, part of the huge NetWeaver architecture, JDO was not supported. Period. You could hack and get JDO out of it, but who wants to develop software that way?

I would prefer a solution based on a code generator, or, if possible, something similar to java.io.Serializable (although the latter may be too vague to argue about, as it only introduces a marker interface). But a code generator would add the necessary flexibility as well as the necessary capability! And you could TDD your code as you like and do code coverage analysis on source code, which would make things much more fun. Some say: don't test third-party products. But relying on "a" third-party toolkit made by people unknown to the person expected to trust them is not the best sort of risk management. Just one argument: open source. Although not bad in general, many of these projects have a certain charm of "Ah, there is someone who wants to contribute; I don't know him, but he is willing to add something, so let him. And if I, the project admin, find time, I will check what he has really done." I love open source, but I would not trust it in general. There is often no support, in contrast to many commercial products (allow me to cut it short here).

17.2.05

When TDD is not optimal

As a big fan of TDD, I often use it, admittedly mainly in a somewhat diluted form. I do not write all tests before writing the code. At least half of them I write after "semi-completing" a method or a bunch of methods.

These days I wanted to bring the functionality of a third-party open-source framework into code of mine. But at the beginning I was not sure whether the concepts of my framework and the other one would allow a joining. Therefore, merging some logic of the other framework into mine had a touch of prototyping.
If I had written test cases before or even during this activity, they would probably have been a waste of time, in case the joining had not worked as I hoped. But on the other hand, ensuring the correctness of the newly created logic is the number one priority, as unknown code is always a danger for a project.

Reflecting on it, I am quite content with my approach of not writing the tests until the current point, where I am nearly finished merging the two frameworks and can see that it works in general. There is enough time for that right now, I am inclined to say, although I know that some guys would raise their hands and remark on the semi-optimal approach chosen. An important aspect seems to be the experience of the developer involved in the process, from the perspective of software development in general as well as from the special perspective of the code to be extended.

Has anyone out there merged two code bases while writing test cases during or even before the merger? I would be very interested in the details.

15.2.05

Sourceforge in trouble?

For every sourceforge project there are nice usage statistics reflecting the number of page views and downloads per day (or other time spans if you want).

But since January 15th, over one month ago, no further statistical data has been shown.

Sourceforge knows about this and informs users that they are moving to a new statistics system:

"Project statistics data for 2005 that is omitted will not be processed until launch of the new statistics system. This data has been collected, but has not been processed for display. To ensure accuracy and reduce performance impact to users, these dates will be omitted until the new statistics system is launched. There is not currently a date set for the rollout of the new stats system. Please keep an eye here for that date when we release it."

What I wonder about is the amount of time this takes. One month is almost beyond any basis for discussion, I strongly believe. There must be a deeper reason, and not just small problems that arose while doing the job.

1.2.05

Why Quantum Theory is important for Object Orientation

Historical information
Quantum Theory is the most precise complex theory we have today. There is no other theory of such universality and brilliance. The theory was founded by Max Planck in the year 1900. In his earlier years, Planck wanted to study physics and was told that this would not be worthwhile, as everything of importance had already been discovered. Good for us that he did not follow that hollow advice.

After Planck laid the foundation, the world-famous Albert Einstein, who is being celebrated this year (the Einstein Year, Year of Physics, marking the 100th anniversary of his groundbreaking 1905 papers and the 50th anniversary of his death), extended the great theory "casually". At the time, he was employed at the patent office in Bern, Switzerland, and was able to think about the problems of Quantum Theory while doing his daily business (which he must have found rather boring and which used perhaps 1% of his intellectual capacity, I assume). That is someone you would call a genius!

Other great names associated with the theory are Niels Bohr (Danish, outstanding character and physicist, mentor of Heisenberg), Werner Heisenberg (founder of the Uncertainty Principle), Erwin Schrödinger (who developed wave mechanics, as Heisenberg did, but from a different perspective, as was discovered later) and, not to forget, Max Born as the teacher of many great physicists such as Heisenberg, Robert Oppenheimer (director of the well-known Manhattan Project in Los Alamos), Fermi and Edward Teller (father of the hydrogen bomb).

It's all about philosophy
For most people without a general understanding of Quantum Physics, it is not easy to see that this theory is, first of all, capable of explaining our daily life's experiences (such as velocity, chemical reactions etc.). One must be aware that Quantum Physics contains Newton's theory, which is only a special case of Quantum Physics! Secondly, Quantum Theory predicts, and this is common sense and beyond discussion in physicists' circles, that nothing is determined, meaning we are ruled by randomness. If you don't believe it, you are in the company of Einstein, but not of Stephen Hawking. Thirdly, Quantum Theory is the most precise theory we have. Period. Next, Quantum Theory and Einstein's Theory of Relativity meet when it comes to trying to explain black holes. This is because Quantum Theory copes with very small particles, and a black hole is something very, very small. And it is because Einstein's theory is appropriate for explaining huge concentrations of mass, such as a black hole.

There is no Reality
Although Quantum Theory is the best theory we have, it is not a correct and complete one. If you don't believe this, I suggest reading some books about it. The theory is all about trying to give us a quite good description of what reality might be and how we could predict future states of our reality. Heisenberg's Uncertainty Principle alone prohibits knowing reality precisely!

As we can easily see, Physics and Object Orientation both deal with objects. The type of object does not matter, be it a technical instance or a real instance. There are no real instances, as said before. Both, Physics as a whole and the OO paradigm, are only descriptions belonging to a theory that is commonly seen as "good". Nothing more.

Shortfalls
So, just being logical and asking a provocative question: If:

  1. Quantum Theory is a very good theory and
  2. Quantum Theory for sure is much more complex than the theory of Object Orientation and
  3. both theories are about describing objects as part of our "reality"

then how could someone regard OOP as a near-perfect paradigm? How could someone think OOP is a really good theory? I don't want to say that OOP is a bad thing; I love it. But as OOP is so extremely simple compared to Quantum Theory, and OOP has its weaknesses, how could we think there is no potential for improvement? Many people don't think so, fortunately. But some love OO so much that they become blind to its pitfalls. Just remember AOP as an attempt to make OOP more powerful and capable. And every time you try to make something more powerful, it will soon get too complicated. So AOP will be doomed to die out in just a few years, as will classical OOP. AOP is too impractical to be used by the mass of developers out there. But it is a necessary step towards recognizing the possibilities and necessities of forming a new theory for describing situations in a machine-understandable form.

27.1.05

Resources related to "Information Engineering"

The science I want to call Information Engineering here copes with the evaluation of data in order to obtain information, and with the extraction of high-priority information (filtering out data of lower interest) from a plethora of information. Information Engineering helps reduce the problem of information overflow; just remember the latest prominent example of Cassini-Huygens.

I came across the following resources when investigating the subject. My motivation was the advancement of the Java Genetic Algorithms Package, JGAP, developed by some other people as well as by myself.

Collection of Resources

Also see my other blog entry about Visions for Evolutionary Algorithms

Visions for Evolutionary Algorithms

Browsing sourceforge and other open-source platforms, you find myriads of free software packages. Many of them have an air of sophistication. But what about their practical use? A recent article states that the NSA is in dire need of a capable software tool that can identify information with the potential of being valuable. The NSA wants to "connect dots" and needs informatics to do so.

These classification problems are among the most interesting ones I can imagine nowadays (in my role as a computer scientist and "informaniac"). What I think would help in bringing about a solution is Evolutionary Algorithms (EAs). EAs align with nature: evolution based on Charles Darwin and inheritance based on Gregor Mendel. Genetic Algorithms (GA) and Genetic Programming (GP) are part of the EA approach. The GA framework currently under development, JGAP (Java Genetic Algorithms Package), helps accomplish both concepts. Work is currently being undertaken to develop GP support by adapting GA and infusing more flexibility into it.
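The core GA loop (selection, crossover, mutation) can be sketched in a few lines of plain Java. This is a generic illustration of the concept on the classic OneMax toy problem (maximize the number of 1-bits), not JGAP's actual API; population size, chromosome length and mutation rate are arbitrary choices:

```java
import java.util.Random;

// Minimal genetic-algorithm sketch: tournament selection, one-point
// crossover, occasional bit-flip mutation, fitness = number of 1-bits.
public class OneMaxGA {
    static final int POP = 20, LEN = 16, GENERATIONS = 200;
    static final Random RND = new Random(42);

    static int fitness(boolean[] genome) {
        int f = 0;
        for (boolean bit : genome) if (bit) f++;
        return f;
    }

    static boolean[] tournament(boolean[][] pop) {
        // pick two random individuals, keep the fitter one
        boolean[] a = pop[RND.nextInt(POP)], b = pop[RND.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] g : pop)
            for (int i = 0; i < LEN; i++) g[i] = RND.nextBoolean();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            boolean[][] next = new boolean[POP][LEN];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                int cut = RND.nextInt(LEN); // one-point crossover
                for (int i = 0; i < LEN; i++)
                    next[k][i] = (i < cut ? p1[i] : p2[i]);
                if (RND.nextInt(10) == 0)   // mutate ~10% of offspring
                    next[k][RND.nextInt(LEN)] ^= true;
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] g : pop) best = Math.max(best, fitness(g));
        System.out.println("best fitness: " + best);
    }
}
```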

If you are interested in knowing more, feel free to go to the sourceforge site of JGAP to see project information, or use the project homepage, which provides various information about GAs of general interest.

A valuable resource about genetic engineering can be found in Wired magazine with their article Life, Reinvented.

Also related: Games that make leaders: top researchers on the rise of play in business and education

24.1.05

Java UI Frameworks: A long way to satisfaction!

Similar to my former entry, Java Persistence: Failed, I have been stimulated by another entry, this time from Alexey Maslov, who asks Do we need another UI framework?.

In my comment on his entry, I stated that in my eyes there is no real UI framework for Java capable of representing a common platform for UI development. Either it is too expensive (I want a framework for free, please; look at Delphi, SAP etc., where several standardized mechanisms exist to satisfy the developer), or it is unstable, too hard to learn, not capable enough, etc.

Sorry, I have not found a single framework fitting my simple needs. When trying out Luxor/XUL, at first I thought there was light at the end of the tunnel. But then I soon recognized the limited features when it comes to event handling and other non-regular stuff (what is irregular about event handling, one might ask).

As long as we ask questions such as Why does GridBagLayout get tab order wrong?, we have a long way to go to satisfaction!

Developers Don't Write Documentation

An article titled Developers Don't Write Documentation can be found at OpenXource. It is written with some wit and is worth reading. As the title expresses, the author's opinion is that developers shouldn't focus on writing documentation. By documentation I assume he means documentation for the user of the software, not technical documentation.

Reasons
Well, I also assume that most developers love to code, or to construct and develop an architecture, but don't like to write that much plain text for the stupid user (ironically meant, of course). This seems human. When working on a doctorate, you notice that thinking about great ideas, getting inspiration, drawing graphics and schemata is refreshing and satisfying most of the time. But when it comes to clearly structuring the material and choosing correct, concise and consistent formulations (a 3 C's Law?), the brain often gets stuck.

For software, the reason could be the break in paradigms between writing code and writing natural language. The latter implies a more or less informal system, an inconsistent grammar, ambiguous expressions and no compiler checking everything. The only thing you have at hand is perhaps a word processor aiding to some extent with simple grammar checks, embodying the intelligence of a small child (which is astonishing, as intelligent software is rare and hard to craft, IMO).

Test-Driven Development
There is always a problem with letting an outsider (not a developer) write documentation. How does he know how the system works and what the intentions of the developers are? In my eyes, a documenter needs to understand at least a bit of software development. And if he knows too much, he would probably love to code and would not be the right person for documenting.

To get out of this mess a little, I would like to suggest the idea of Test-Driven Development (TDD). TDD is not only a way of validating that your code works as expected but also (or mainly!) a documentation tool. TDD expresses what your code should do. It's the same with Design Patterns, although many people don't recognize the documentation power of both Design Patterns and TDD!

So go write your tests and thereby create a basis for other people to understand your well-validated masterpiece in order to write documentation for it. Isn't that a valid approach?

23.1.05

Test-Driven Development: Useful Techniques, Resources

Test-Driven Development (TDD) is the subject of many publications. I feel that it was never just hype. From my point of view, it is a really useful concept that helps greatly in reducing errors. That seems proven, IMO. I have used TDD in several projects, currently with JGAP and with a software package based on JRefactory.

I have found that many test cases rely on the same testing techniques, such as:

Type Cast
When testing whether the return value of a method call conforms to a certain type, you could write:

   assertEquals("My String", vector.get(0));

Better would be:

   assertEquals("My String", (String)vector.get(0)); 

The latter makes the type explicit: if the element has a different runtime type, the cast fails immediately with a ClassCastException, which points directly at the type problem rather than at a value mismatch.
You could also use the type cast if you only wanted to ensure that an element at a given index in the list is not null and conforms to a special type.
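A runnable sketch of this technique, with a plain-Java check standing in for JUnit's assertEquals (the vector content is made up):

```java
import java.util.Vector;

// "Type cast" testing technique: the cast itself is the type check.
// If the element were not really a String, the cast would throw a
// ClassCastException and the test would fail at exactly that point.
public class TypeCastCheck {
    public static void main(String[] args) {
        Vector vector = new Vector();
        vector.add("My String"); // pretend this came from the method under test

        String s = (String) vector.get(0); // fails fast on a wrong type
        if (!"My String".equals(s)) {
            throw new AssertionError("unexpected element: " + s);
        }
        System.out.println("ok");
    }
}
```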

Elements in unordered collection
If the result of a method under test is an unordered List or Map, it should nevertheless be tested whether the obligatory elements exist in the collection and others do not. We can easily accomplish this by first building up an internal Map (e.g. java.util.HashMap), putting in all elements that are expected to be returned by the method under test.
After that, we perform the method call. Then we iterate over the returned list and call a helper method such as

   assertInList(final Map list, String s); 

This method could read like:
public void assertInList(final Map list, String s) {
  if (list.containsKey(s)) {
    list.remove(s);
  }
  else {
    fail("Object " + s + " not in list!");
  }
}
Of course, you could extend this helper method to handle types delivered in the java.lang package. Sometimes a developer states java.lang.Comparable and sometimes he omits the package. By extending assertInList to also check for the searched string prefixed with java.lang, we could treat these cases as equal.

Don't forget to write

     assertEquals(0, list.size());
after testing against the obligatory elements, to ensure that every expected element was actually found in the returned list (unwanted elements are already caught by assertInList itself).
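The whole technique, made runnable. The expected elements and the simulated result list ("alpha", "beta") are made up, and fail() is a homemade stand-in for JUnit's:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Unordered-collection check: tick expected elements off a Map while
// iterating over the (unordered) result, then verify the Map is empty.
public class UnorderedCollectionCheck {
    static void fail(String message) { throw new AssertionError(message); }

    static void assertInList(final Map list, String s) {
        if (list.containsKey(s)) {
            list.remove(s); // seen - tick it off
        } else {
            fail("Object " + s + " not in list!"); // unwanted element
        }
    }

    public static void main(String[] args) {
        // elements we expect, order does not matter
        Map expected = new HashMap();
        expected.put("alpha", null);
        expected.put("beta", null);

        // pretend this came back from the method under test
        List result = new ArrayList();
        result.add("beta");
        result.add("alpha");

        for (int i = 0; i < result.size(); i++) {
            assertInList(expected, (String) result.get(i));
        }
        // empty map means every expected element was actually returned
        if (expected.size() != 0) {
            fail("expected elements missing: " + expected.keySet());
        }
        System.out.println("ok");
    }
}
```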

Molecular tests
It is not forbidden to include multiple asserts within one test case. The advantage of only one assertion (or more generally: one check) per test case is that one can immediately see the point of failure. But with my IDE I can easily figure out the failing assertion just one mouse click later. Multiple assertions in one test case help validate more than one thing without the need to construct the input configuration again and again (instantiating objects, setting parameters).

Resources
Other publications about TDD: