CyberTech Rambler

October 29, 2008

Google settles with publishers… and I am disappointed.

Filed under: Uncategorized — ctrambler @ 2:54 pm

I haven’t read the settlement yet, but, like PJ, I am disappointed that we missed the chance to clarify fair use.

I empathize with Google and understand that it just wants to get on with the project.

How difficult is it to install Windows XP?

Filed under: Uncategorized — ctrambler @ 2:51 pm

Just yesterday, I had to downgrade the two Sony Vaios we bought from Windows Vista to Windows XP. The two Vaios are top-of-the-range models, so expensive that I got a kind reminder that we have a limited budget.

I thought the reinstallation was going to be bliss: pop in the Windows XP downgrade DVD and, puff, I would get a working XP installation in, say, one hour tops. After all, we know that the XP downgrade is still very common, the computer is a “consumer grade” product, and it is not just a Sony but a top-of-the-range Sony.

The final count? Three hours per computer, from start to finish.

The downgrade DVD is just bad, extremely bad. Rather than fully installing the whole operating system and customizing the drivers and so on for the notebook in one step, the XP installer puts only bare-metal XP on the machine. One then needs to dive into the CD to install the bare-basic drivers just to get the Vaio to connect to the internet. The sound isn’t working, the Bluetooth isn’t working. In short, at this stage you have a crazily expensive server with an internet connection and nothing more.

Then you have to connect to Sony’s website to download not one but two zip files. And no, it is not just a case of unzipping and running the installer: you need to follow the instructions given on the website. In all, you run around the downloaded zip packages, clicking one installer after another and going into “Device Manager” to install the components that do not have an installer. Then, one by one, you get sound, Bluetooth and the rest.

At the end of the day, what do I get? A Windows XP computer that I still have to connect to Microsoft to download patches to keep up to date, and on which I still have to download the other software expected of a modern computer, individually I must add.

What did I learn from this experience? Never attempt it again. Get someone else to do it.

Let’s admit it. What effectively happened is that Sony sold me a very expensive white box with all the components inside and expected me to install XP on it. It did provide a step-by-step guide. The guide is reasonably accurate, but it takes computing expertise to interpret it correctly and with confidence. In other words, the only difference between getting the machine from Sony and getting a custom build from my local computer shop is that my local computer shop cannot give me a step-by-step guide for installing the software.

What if I wanted Linux instead? I would simply pop a Linux distribution into the DVD drive and have a fully functioning computer in under one hour. One more hour and an internet connection mean I would have updated my installation with the latest patches and installed all the software I need. Not only would I save time, I would not need to devote my full attention to clicking buttons to keep things going. I simply tell the computer what I want, then let it get on with it.

What this shows us is that because Windows comes pre-installed out of the box most of the time, everyone is unaware of the effort needed to get a working Windows computer. A lot of people are under the impression that Windows is easy to install. I did not expect the Windows installation to be easy, but I did not expect it to be this hard either. In fact, my experience shows that because there is no real need to make sure ordinary users can install Windows, the Windows installation procedure has fallen badly behind Linux.

It was a valuable experience, but not one I care to repeat. Did I have to do it? Unfortunately for me, yes. I needed to know whether we have the expertise at work to do a downgrade. My conclusion is that we don’t, unless the manufacturer gives us a step-by-step guide. Even then, we need someone reasonably clued up about computers to do it.

Microsoft rips off Aussie Care Centres? A bit too dramatic

Filed under: Uncategorized — ctrambler @ 2:20 pm

Down under, news is trickling out that Microsoft raised the licensing fee for Aussie care centres but has since back-tracked. It is a good demonstration of the perils of vendor lock-in, and I think Microsoft has a duty of care to its customers not to raise prices so suddenly, if it exercises its right to raise them.

Personally, I think we will probably still see a rise in their licensing fee. If we do, it will not be so dramatic.

I don’t buy Microsoft’s reason for this rise. It says it “uncovered” organizations that were “abusing” its academic volume licensing. First of all, at the very minimum, it had not done its due diligence when it sold them the licences in the first place. Second, this “academic licensing” business is usually muddy, not only with Microsoft but everywhere. A lot of companies, and I am not saying Microsoft did this, use it as a “discount” for a certain segment of the industry in an effort to gain a foothold. What we see later is the company yanking the “discount” once it feels secure in that segment. I have seen it at least once before. In that case, the company bit off more than it could chew, and it backfired quite badly, though not catastrophically.

It is still nice to see that Microsoft, despite its size, cannot take on an entire industry on its own. This is important because if Microsoft cannot, then others will not stand a chance. It is trying to manage the situation by re-segmenting the market, creating a new segment called “non-profit”.

Who’s right and who’s wrong in this saga? I don’t think we can draw a bright thin black line. The grey area is so large that it is a judgement call, and it depends on the criteria you use.

In all, just like what happened to the company I alluded to earlier, it is bad publicity for Microsoft.

And good publicity if, like me, you are an advocate of avoiding vendor lock-in.

October 27, 2008

Element vs Attribute

Filed under: Uncategorized — ctrambler @ 1:25 am

In his post, “Old wine in new skins“, Patrick Durusau tries to engage us in an age-old discussion, one that still bugs XML designers today: when to use an attribute and when to use an element. He framed it as one facet of the ODF vs OOXML beauty contest. My main objection to OOXML’s syntax style is not “attribute vs element” but the unnecessary pollution of implementation detail, which makes OOXML difficult to read. Nonetheless, I will bite.

Firstly, I would like to complain about his definition of “semantically correct”. To me, semantic correctness in XML means nothing. It is really easy to come up with a semantically correct XML syntax, especially if you are free to do whatever you wish.

The point of XML design is not to achieve semantic correctness, but to say what you mean in a logical way with minimal fuss, with the greatest clarity, easily read by machine and human alike, and, as far as possible, free from implementation details. Having implementation details leak into the XML is unavoidable, but the effect can be minimized. Let’s take the example of me and my dog. It is semantically correct to implement either “I->own->dog” or “dog->owned_by->me”. If you are running a database that reunites missing dogs with their owners based on the name on their dog tag, then the second is more appropriate, since the information you have is the dog’s name and your search will necessarily start by identifying the dog with that name. This means you will be looking at dogs’ names more frequently than at owners. Using the second scheme, you simply look at the top-level elements; with the first, you have to navigate down one level to fetch the dog’s name, an extra and probably unnecessary operation. Choosing the second scheme instead of the first is definitely an “implementation leak”, but it is unavoidable. It would indeed be stupid to insist on the first, even though, for a human, it is the more natural way of representing the owner-dog relationship. It is, however, incorrect to capture the implementation detail as dog->owned_by->pointer_address(0x0002a)->me simply because you happened to use a pointer to match my dog to me.

Now, which is easier to read?

<owner name="ctrambler">
  <dog name="DoggyRambler" />
</owner>

Or

<owner>
  <name value="ctrambler" />
  <dog>
    <name value="DoggyRambler" />
  </dog>
</owner>

Both are semantically correct. Both represent the same thing, i.e., “I->own->dog”. I am sure at least 90% of people will say the first communicates better than the second. Therefore, the attribute wins hands down.

More importantly, from a technical point of view, the first is potentially more cost-effective. Say I want to search for my dog’s name in the database and I am already at the correct “owner” element. Taking the simplest case of one dog per owner, but bearing in mind that I might store other elements or attributes as well, let’s look at the potential cost. In the first case: search for the “dog” element, then search for its “name” attribute, i.e., two lookups. In the second case: search for the “dog” element, then search for the “name” element, before finally searching for the “value” attribute. Three lookups. Note also that in the second case I have more elements than in the first, since what would normally be attributes have become elements. This matters because searching for an element may not be as efficient as searching for an attribute: XML parsers normally keep attributes together with their element, but keep elements separately. This is a common design trick for parsers that have to optimize for memory and store data in disk caches.
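
To make the counting concrete, here is a minimal sketch using Python’s standard-library ElementTree, with the toy names from my examples above. It only illustrates the access paths; it is not a benchmark of any particular parser.

import xml.etree.ElementTree as ET

attr_style = ET.fromstring(
    '<owner name="ctrambler"><dog name="DoggyRambler" /></owner>')
elem_style = ET.fromstring(
    '<owner><name value="ctrambler" />'
    '<dog><name value="DoggyRambler" /></dog></owner>')

# Attribute style: one element lookup, then one attribute lookup.
print(attr_style.find('dog').get('name'))                 # DoggyRambler

# Element style: two element lookups, then one attribute lookup.
print(elem_style.find('dog').find('name').get('value'))   # DoggyRambler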

I know a good XML parser, created to parse one specific XML syntax, can be optimized to remove much of the cost associated with element access. However, that is a luxury affordable only to those who write applications that depend extremely heavily on a single XML syntax. The rest of us, me included, have to depend on general-purpose XML parsers, where element-to-element navigation is likely to cost more.

Also note that having two “name” elements is semantically correct, but having two “name” attributes is not. To limit the “name” element to one, you need a schema. In short, you need a secondary mechanism to constrain your XML syntax if you go with the “name” element. Do you really need this extra complication, or should you use the built-in XML rule to enforce a single “name”?
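
A quick demonstration of that built-in rule, again with Python’s ElementTree (any conforming XML parser behaves the same way, since duplicate attributes make a document ill-formed):

import xml.etree.ElementTree as ET

# Duplicate attributes are forbidden by XML itself, so the parser
# refuses the document outright.
try:
    ET.fromstring('<owner name="a" name="b" />')
except ET.ParseError as err:
    print('rejected:', err)   # duplicate attribute

# Duplicate child elements are perfectly well-formed; only a schema
# (a secondary mechanism) can forbid them.
owner = ET.fromstring('<owner><name value="a" /><name value="b" /></owner>')
print(len(owner.findall('name')))   # prints 2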

If this is not enough to convince you that the attribute is the better approach here, let’s look at another example:

<myelement name="e1" restriction="restrictionA">
  <myelement name="e2" restriction="restrictionB" />
</myelement>

Or

<myelement name="e1">
  <restriction value="restrictionA" />
  <myelement name="e2">
    <restriction value="restrictionB" />
  </myelement>
</myelement>

In which version are you more likely to be certain that “restrictionA” applies to the element named “e1”? It is more of a judgement call, but I believe more people will read the version where “restriction” is an attribute that way. We are predisposed to think of an attribute as a property of its element: it definitely applies to that element, and maybe to its children. A child element, on the other hand, simply tells us it has a relationship with its parent and with its own children, nothing more. The “restriction” element could equally be saying that the restriction it carries applies to the parent, to itself, to its children, or indeed to any combination of the three, with no preference at all.

So, which is better, element or attribute?

Before I conclude this post, let me assure you that the decision to use an element or an attribute to represent something is normally not so straightforward, especially when the data you want to capture is not a simple piece of data. Take, for example, capturing my name in two parts: first name and family name. In that case, the advantage of using an attribute is somewhat diminished from a design point of view.
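
For illustration (the names here are invented for this post), a two-part name can still be squeezed into attributes:

<owner first_name="Cyber" family_name="Rambler" />

but the moment the data needs internal structure or repetition, only elements will do:

<owner>
  <name>
    <first>Cyber</first>
    <family>Rambler</family>
  </name>
</owner>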

In his post, Durusau used a simple boolean attribute as his example, which is why I chose to treat “name” as a simple property in mine.

October 21, 2008

Interoperability at different levels

Filed under: Uncategorized — ctrambler @ 12:13 pm

Alex Brown has a write-up on interoperability. A good write-up. It starts with a general description of application interoperability and progressively narrows it down to document formats.

He concentrated on what Microsoft can or cannot do, but neglected to say what others have to do to make it happen. They can help, or they can throw a spanner into Microsoft’s hard work. That is the only major criticism I can level at him, i.e., an apparent failure to mention that interoperability is mainly a two-way street. If the other party does not want it, there is only so much one can do before hitting a brick wall. More on this later.

One other minor thing I disagree with is the characterization of ODF 1.0, ODF 1.1 and ODF 1.2 as different standards. The latter two are evolutions of ODF 1.0 within OASIS. Saying that OASIS ODF and ISO ODF are different is fair enough; I will accept that. When I first read it, I wanted to launch into a tirade about how small the difference between OASIS ODF and ISO ODF is compared with the difference between ECMA OOXML and ISO OOXML, but once I cooled down I decided the correct thing is to accept them as different, for one important reason: OASIS ODF has moved on to 1.1, and the major vendors support, or plan to support, 1.1, which is definitely different from ISO ODF.

I believe this line of attack is also a bit unfair. Microsoft is definitely cooking up its own home-brew OOXML version 1.x. Since this is done in private, the same way most proprietary standards are, we cannot point the finger at a specific ISO/ECMA OOXML 1.x in the making. To point that same finger at a new version of ODF (version 1.2) in the making and call it a “new” standard, simply because the work is done in the open, is unfair. If I am being critical, the existence of a maintenance regime means SC34 is brewing an ISO OOXML version 1.x as well.

[When I wrote the last paragraph, I realized that one potentially significant difference between ODF and OOXML in terms of the maintenance regime is emerging: ODF evolution is going to be done in OASIS, barring a successful snatch by ISO’s SC34 to bring it “in house”. This already means ODF 1.1 is not an ISO-sanctioned version, and the same can be said of other ODF versions unless they are brought back into ISO. With OOXML, since SC34 is said to be in charge of maintenance, it can claim to be ISO-sanctioned all the way through. This can open up a lot of FUD. In real life, I do not think there will be much difference, because we can still rely on the professionalism of the JTC1 committee not to put unnecessary hurdles in the way if OASIS comes back for ISO backing for new ODF versions.]

Back to interoperability. As I said, it is a two-way street. However, in the past and in the present, it has looked like a one-way street. Before ODF, Microsoft did not want to cooperate. It unilaterally published its own format and let others scramble to interoperate with it. After Massachusetts delivered the shock to this way of working, and as this debate became more and more interesting, we saw a shift in the balance of power: now it is the ODF vendor camp that refuses to support ISO OOXML [I don’t count one-way conversion, i.e., OOXML to ODF but not vice versa, as interoperability]. The claim of “interoperability” is hollow, from a document format point of view, if only one application, i.e., Microsoft Office, can read and write both OOXML and ODF natively [caveat: this has not happened yet]. Did Microsoft know this? Yes, at least a year ago. Why else would it stipulate in its contract with Novell that Novell had a deadline to deliver OOXML write ability in OpenOffice.org and to ship it in their SuSE product?

Astute readers will notice that when I discussed interoperability above, I focused on applications, not file formats. From a user’s viewpoint, there is no difference between application-level interoperability and file format interoperability. From a technical viewpoint, there is. Being a file format expert, Brown would naturally prefer to see interoperability at the file format level. It has its merits, chief among them a formal, one-to-one mapping from one format to another. I, however, am not sure it is needed. The fact that the two formats represent the same thing at different levels of detail means we can never achieve a one-to-one mapping anyway. The best way to maximize the chance of a document looking the same is not to cross a boundary (OS, application, file format) if possible; once you cross one, be prepared for some incompatibility.

Moreover, from a user’s point of view, it is the aesthetics that count, and aesthetics are delivered mainly by the application, not the document format. If the application I use is only capable of positioning my table to an accuracy of 1 cm, I am bound to find that my document looks different in that application and gets saved differently too. I would rather leave file format conversion to the application.

October 17, 2008

If the ISO OOXML format is not published, then ISO fails to provide a level playing field

Filed under: Uncategorized — ctrambler @ 7:57 pm

As far as I can tell, it has not been published yet. The best you can get is the leaked version from NoOOXML.org. It’s a shame that you have to resort to piracy if you want to implement an ISO standard. And isn’t it ironic that you get it from OOXML opponents? It’s understandable that the people involved in the standardization committee and in ISO are not happy about the leak. However, it is their inaction that forced people to become pirates. A lot of people, including me, will say this failure to publish violates ISO rules, and that the lengthy delay in publication is now so serious that it alone can and should result in the standard being revoked, even though ISO’s big guns did not think that necessary when they chose to reject this line of appeal against OOXML.

I am the first to admit that I am one of the millions who do not need to read the published standard. However, publication of the standard is important. Why? First, and philosophically speaking the most important point: publishing standards is the purpose of standardization at ISO. ISO is there to set standards. Setting a standard means publishing it. Not publishing it means there is no standard. And if there is no standard, why talk about it in the first place?

Again, philosophically speaking, no publication means ISO fails in its second aim: widespread dissemination of the standard.

Technically speaking, without publication we cannot have any application that supports it. When there is no published standard against which to evaluate an implementation, there can be no standard-compliant implementation.

However, in real life, the most important thing to note is that the delay in publication has unjustly penalized those who rely on the standard’s publication to work on their own products that support it. Without publication, only those involved in the process have access to the document, and ONLY they can start working on implementing the standard. Everyone else has to wait. What does this mean? Those in the know have a leg up on the rest of us. This is unfair. While they can work on an implementation of ISO OOXML and blog publicly about how ready they are to support it (“We will add support for IS 29500 as soon as the standard is made public”), others are still waiting for the standard to be made available before they can create an implementation to compete with it. Note that I am not picking on Microsoft; the same applies to everyone who has had the privilege of receiving a copy of the publication. This includes IBM [Rob Weir publicly admitted he received a copy of the standard 😉 (sorry, I cannot find the link to the blog post where he admitted this)]. Is this fair? I must commend (no, I am not being sarcastic here) Microsoft for the courtesy of not supporting ISO OOXML before its publication. However, it is unacceptable that ISO is instrumental in perpetrating this unfairness.

I know that for anyone who worked on the ECMA OOXML standard, the jump to ISO OOXML is, relatively speaking, a smaller step, and most implementers would have followed that route. It is still unfair. If only for appearance’s sake, ISO should be seen to try its utmost to establish a level playing field for its standards. Most importantly, the BRM proceedings, findings and decisions are not published in sufficient detail to “reconstruct” the ISO OOXML standard. Quite a lot of changes, including non-trivial ones, were introduced in the ISO process (see this post from Brian Jones) that we do not know about, such as how ISO OOXML supports ISO dates. Not on the ECMA or ISO committee? You cannot get your hands on this information.

Yes, I know: if you make a fuss, as Rob Weir did, they will send you a copy. But the point of standardization is that you should not need to make a fuss.

In case ISO’s big guns need to be educated on why they should insist on following the “one month” publication rule, I hope this blog post teaches them something.

October 16, 2008

Apple’s strength: PR

Filed under: Uncategorized — ctrambler @ 2:33 pm

Mac Unibody design: carved out of a single piece of aluminium. Wow!

Or is it?

Examine it more closely and you will find that a lot of computer equipment out there is already carved out of a single piece of aluminium. What we have here is an application of an existing technology, but with a skilfully managed PR campaign to publicize it.

With Apple, carving things out of aluminium suddenly becomes sexy. Frankly, I don’t think the inventors ever thought they would see this day.

Apple Tax? Was it?

Filed under: Uncategorized — ctrambler @ 2:29 pm

Mary Jo Foley blogged about Microsoft’s effort to counter Apple’s MacBook event. I don’t think publicly disclosing Microsoft’s behind-the-scenes PR effort earned Foley any brownie points with Microsoft. I would expect such a move from any of Apple’s competitors; that event was too well publicized. So far, we only know that Microsoft did something. Others might have done the same, but we will not know until someone discloses it.

What is really interesting is that later in the post, Foley mentioned that Microsoft has coined the term “Apple Tax” and is using it as a marketing strategy. Naturally, that got me interested. The term is reminiscent of “Microsoft Tax”, a term coined by others to mean the royalty paid by computer makers to Microsoft on every PC shipped, regardless of whether any Microsoft product was installed on it. [It is plausible, since this makes accounting easier, but unsubstantiated.]

I was disappointed to see Microsoft define it as having to pay more to get applications and hardware to run on a Mac. To be fair, in the interview (see the second link) Microsoft acknowledges that it is a “choice tax”, i.e., one decides whether to pay it by choosing to buy a Mac. In effect, Microsoft is using the word “tax” the way we call VAT a tax, i.e., you have the choice of not buying a product and therefore not incurring the tax. In that sense, every tax can be avoided. However, my view of tax is more rigid: you have practically no choice but to pay it, because you have to do the thing that is “taxable”. If you do not earn an income, you do not pay income tax, but do you realistically have the choice of not earning an income? The “Microsoft Tax” is one such tax, since, if it exists, virtually no one can avoid it. The “Apple Tax” is not, since you can do what Microsoft suggests, i.e., buy a non-Apple computer.

The most damning criticism of this argument is that, in this day and age, it is simply no longer true.

It is important to note that the context of this argument is the desktop computer, so there is no point arguing about servers.

Let’s take it apart.

For applications, the cost of getting software on a Mac to fulfil my daily needs is zero. It is the same on Windows. How do I do it? By running free and open source software such as OpenOffice.org and GIMP. Having taken applications out of the purchasing decision, it becomes a head-to-head competition between the operating systems, and we are looking at the value proposition of each OS. I could choose Linux, and then everything would be free, but I chose a Mac. Why? In my case, because OS X offers better application integration, and I appreciate it. I don’t have to remember whether it is “Ctrl-C” or “Shift-Ctrl-C”. Then there is a second thing called ease of use. On all three OSs we get a simple jukebox for our audio-visual files, a simple text editor, and so on. In all of these, the Mac is usually the favourite.

Is a hardware upgrade more expensive on a Mac? First and foremost, how many people actually upgrade their computers? There is some truth in the claim, though, especially if you want to buy the parts from Apple. But I have been connecting my generic mouse, USB hard disk drive, standard memory modules and so on to my Mac, and they work. Why? Everyone in the hardware business, whether Sun, Dell, Apple or the Windows PC makers, is using the same set of hardware specifications and probably sourcing from the same suppliers. If you buy a part from Apple for your Mac, you expect to pay more for some dubious guarantee that the part works; you do the same if you buy a part from Sun or Dell for your Sun workstation or Dell computer. Your selection of hardware depends only on the software drivers that get your OS to speak to the parts you buy. It is true that Microsoft Windows has the best collection of drivers and therefore the widest choice; with Mac or Linux, you have to be more careful. In practice, however, the price range available to you is essentially the same. Does Microsoft know that? You bet. Nonetheless, it is a good bashing point whenever Apple (and just about any other laptop manufacturer) wants to steer you towards buying parts from them at inflated prices.

October 13, 2008

Should schools that opt out of Microsoft licensing keep the money?

Filed under: Uncategorized — ctrambler @ 11:53 am

An interesting debate has popped up in New Zealand: should a school that chooses to use non-Microsoft products get the Microsoft licensing money earmarked for it, to support its non-Microsoft products?

As if to complicate the debate, the school proposed to share the benefit with the schools that do use Microsoft products, by asking for only a portion of the money.

In terms of the bureaucracy, accountability and technicalities that are mainly aimed at preventing misuse, I can see why money marked “Microsoft licensing cost” cannot be used for other purposes. However, as usual, the counter-argument is “we should share in the proceeds of any saving I make for you”. Otherwise, it certainly feels like being penalized.

Add to the mix the complication that NZ’s Ministry of Education says “state schools are free to choose their software provider”, which implies that the MoE should support any “breakaway” school by funding it appropriately (read: give it some money from the MS pot).

Moreover, the school wants to use the money “to employ a local technician and further develop the Linux environment.” This complicates the issue, because relabelling “MS licensing money” as generic “licensing money” is much easier than turning it into “technical support money”.

There is no easy solution. The question looks simple, but the consequences are difficult to predict. We know it will take up meeting time at the ministry, and it will mean more paperwork for the ministry if it chooses to reallocate the funds. Since we have heard about it, a public dimension has been added as well. If Warrington School is successful, other schools will want to copy it, and that weakens the MoE’s bargaining stick with Microsoft because the pot gets smaller, which will inevitably cost the schools that genuinely want Microsoft products more, creating resentment. Not to mention the intensified politicking and lobbying from both the pro- and anti-Microsoft fronts.

It’s a can of worms, basically. One that, in my opinion, has to be opened.

Another classic example of chaos theory? A simple flap of a butterfly’s wings in Otago causes a hurricane at the MoE?

October 11, 2008

Dial down UAC? No… it needs modification (and some dialling up)

Filed under: Uncategorized — ctrambler @ 5:26 pm

A lot of people think, and apparently Microsoft agrees, that UAC should be dialled down. Having experienced installing a printer driver in Vista, I beg to differ.

UAC needs tweaking, that is for sure. More informative messages and better-chosen clicks? Certainly. Take the printer driver, for example: it is a network-enabled printer. Click to approve the printer driver installation? Yes. Click to unblock outward communication to the printer? With that I have a problem. First, I shouldn’t be asked to approve an outward communication at all. Second, the message was too generic, along the lines of “Computer wants to communicate with the world. Approve?” I am an IT-savvy person, so I can make the connection between the network-enabled printer and the computer, but the vast majority of others cannot. So UAC needs modification, such as removing unnecessary authorizations and giving more informative messages, but that is not dialling down.

However, UAC has one really bad flaw: when I click to approve a driver installation while logged in as an administrator, I do not need to enter my password. That is wrong. It compromises physical security. The computer should ensure that nobody except the administrators of the machine can install any software, and it should do so by challenging the person who wants to install the software, not by relying on the current login session. Other operating systems (Linux, Macintosh) do this, and Windows should follow. This is the dialling up that I want to see.

