CyberTech Rambler

February 26, 2010

How much is Microsoft charging for its Novell SuSE subscriptions?

Filed under: Uncategorized — ctrambler @ 8:30 pm

The headline news is that Novell’s Linux business has finally broken even, but the important information to me is that Novell has finally come clean on how much it charges Microsoft for those SuSE coupons. You will find that information eight paragraphs down.

Novell sells the coupons to Microsoft at less than half (45%) of its list price. That is a serious bulk discount, and it gives Microsoft enormous flexibility to sell, or dump, the coupons. Microsoft is unlikely to be selling them at the full list price; at that price, it would be stupid to try. Selling them at 80% of the list price, the price I expect it to charge, already generates a bumper profit of 35% of the list price for Microsoft, and it has to do nothing. My hunch? If you are big enough, Microsoft might charge you a symbolic one dollar, but it is more likely that Microsoft will be palming the coupons off at about 50% of list (the extra 5% covering admin costs on Microsoft’s side). If I were looking at a mixed Microsoft/Linux setup, that would be a bargain.
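
To make the arithmetic concrete, here is a small Python sketch of the margins discussed above. The list price is normalised to 100; the 45%, 80% and 50% figures come from the post, and everything else is illustrative.

```python
# Rough margin arithmetic for the coupon deal. The list price is
# normalised to 100; the 45%, 80% and 50% figures are the ones
# discussed above, everything else is illustrative.
list_price = 100.0
novell_price = 0.45 * list_price      # what Microsoft reportedly pays Novell

for resale_fraction in (1.00, 0.80, 0.50):
    resale = resale_fraction * list_price
    margin = resale - novell_price
    print(f"resold at {resale_fraction:.0%} of list: "
          f"margin {margin:+.0f} points of list price")
# resold at 100% of list: margin +55 points of list price
# resold at 80% of list: margin +35 points of list price
# resold at 50% of list: margin +5 points of list price
```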

What is not good for Novell is that it appears to be having problems persuading the customers who cash in the coupons to stay with Novell. It is charging only 10 to 20% of the old list price for renewals. That is very low. Perhaps it reflects the fact that once the conversion to SuSE is completed, the maintenance is relatively cheap for Novell to administer. Even then, I would expect the renewal price to be about 30% of the list price charged to new customers.

Shouldn’t Microsoft gloat at Oracle returning to the 1970s?

Filed under: Uncategorized — ctrambler @ 8:18 pm

According to TheRegister, Microsoft’s Muglia chided Oracle for returning to the 1970s vertical integration space, calling it “1970s hell”.

That is strange, isn’t it? Microsoft should be gloating at Oracle for being about to make a really expensive mistake and vacating market space for Microsoft products. Why bother pointing out Oracle’s mistake?

Except that this is commercial speak, trying to persuade customers to move away from vertical integration in favour of Microsoft’s less vertical version. What all this chiding does is show that Oracle has a strategy that Microsoft has taken notice of.

What Oracle is proposing makes sense. An Oracle database, in a vertical configuration, is a silo, and I quite like it as a silo. If your database is an Oracle database, it is extremely likely that the data in it is critical to your business. You want to protect it as much as you can, and building it as a silo is one way to do that. It is also an attractive way: provided the price is right, I want to be able to go to one vendor and say “sort out whatever problem I have with my silo”, and that is exactly what Oracle is offering.

Almost every big database deployment has its own database server that other application servers connect to over a network link to get, put and update data. It is perfectly fine for the database server to run on a different operating system from your application servers, since the OS is inconsequential as far as data activity is concerned. In fact, I cannot see people ditching their current application server operating system to match that of their database server. If that were how people behaved, Oracle’s Unbreakable Linux would have sent RedHat packing.
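
As a minimal illustration of why the server’s OS is inconsequential to the client, here is a sketch using the cx_Oracle client library. The host name, credentials and table are placeholders I have made up; the point is simply that the client addresses the database by host and port, not by operating system.

```python
# Sketch: an application server talking to a remote Oracle database over
# the network. The client neither knows nor cares what OS the database
# server runs on; it only needs a reachable host and port.
# (Host, credentials and table below are hypothetical placeholders.)
import cx_Oracle

conn = cx_Oracle.connect("app_user", "secret",
                         "dbhost.example.com:1521/orcl")
cur = conn.cursor()
cur.execute("SELECT order_id, status FROM orders WHERE status = :s",
            s="PENDING")
for order_id, status in cur:
    print(order_id, status)
conn.close()
```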

The vertical integration approach is fine and attractive. That is what spooks Microsoft.

February 24, 2010

Everything carries risk … it is just how you (mis)manage it

Filed under: Uncategorized — ctrambler @ 9:47 am

I think this qualifies as a knee-jerk reaction: the two Flints of Forbes ask car manufacturers to reconsider drive-by-wire, following the Toyota recall problem. That is not to say they do not raise an important point, i.e., the ability to trace back and find out what the problem was. However, they should be calling for better audit/traceability capability to be built into these systems, not asking for the systems to be taken away.
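
For what it is worth, here is a toy Python sketch of the kind of “black box” trace I have in mind: a bounded buffer of recent inputs and commands that investigators could replay after an incident. Everything in it is hypothetical; a real system would record to crash-protected storage.

```python
# Toy sketch of an audit trail for a drive-by-wire controller: a bounded
# ring buffer of recent sensor inputs and actuator commands, so that after
# an incident investigators can replay what the controller saw and did.
# (All names and values here are hypothetical.)
from collections import deque
import time

class EventRecorder:
    def __init__(self, capacity=10_000):
        self.events = deque(maxlen=capacity)  # oldest entries drop off

    def log(self, source, value):
        self.events.append((time.time(), source, value))

    def dump(self):
        return list(self.events)  # handed over after an incident

recorder = EventRecorder()
recorder.log("throttle_pedal", 0.42)    # driver input as read by the sensor
recorder.log("throttle_command", 0.40)  # what the controller actually commanded
```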

Drive-by-wire brings risk, but also benefit. Early adopters will face problems, but that is part of the risk, and the fun, of being an early adopter. Like any technology, electronic or mechanical, as drive-by-wire gets into the mainstream, heavier use will expose problems, as probability becomes statistical certainty. It is unfortunate that engineers have to learn via mistakes. They are difficult and painful lessons, and quite a few will have serious adverse impacts on individuals. The only solace we can take, and the only responsible way of dealing with such tragedies, is to make sure the sacrifices made are not in vain, i.e., we use those tragedies as an impetus to make sure the same thing does not happen again.

No risk equals no gain. The pioneers of air transport sacrificed a lot, including their lives. Large-scale air tragedies occurred then and, though less frequently, still occur today. However, we learned from those lessons. If we adopted what the Flints propose, we would probably still be relying on surface transport only.

Personally, I hate putting a computer in charge of my car, be it steering or engine management. I see it as unnecessarily complicating the whole engine environment. Seeing technicians bring a laptop to hook up to your engine instead of a spanner sends a chill down my spine. However, imagine the benefits it brings: smoother rides, better emission controls and other, not yet imagined, benefits.

In the case of drive-by-wire, I see it as a necessary step towards bringing autonomous navigation technology into cars. It supports the development of the technology by creating a market in which the technology can advance, and I look forward, if it comes in my lifetime, to the day when I can trust my life to a computer to deliver me to my destination more safely than any human driver can. The Toyota problem gives engineers the jolt needed to remind them that they are not infallible, and while it certainly brings home the shortcomings of the technology, and perhaps (rightly) sets it back, I hope it will not develop into a big problem the way GM crops did in Europe.

February 15, 2010

This would not have happened with Microsoft, or RedHat, or SuSE

Filed under: Uncategorized — ctrambler @ 5:57 pm

Ubuntu dropped the proposal to remove OpenOffice.org from its netbook edition, and did it at super-fast speed, i.e., in less than two weeks!

Never before have we seen a commercial vendor discuss in public whether an application should be dropped, and then reverse the decision because the community rejected the proposal. What we normally see is that they make the decision and, when there is an outcry, hide behind the shield named “our customers demand it”, implying “we know better, we did all the research, and you complaining lot are the minority”, regardless of whether that is true or not.

Interestingly, I cannot see this happening with any of the major vendors in Linux, e.g. RedHat or SuSE, or with Microsoft. Their machinery does not seem able to react to community outcry. If a decision is ever reversed, we have to wait years for it to happen.

Ubuntu handled the community outcry well. I call this user involvement. OpenOffice.org tried user participation with a call for UI design proposals. It was a success. However, it generated a storm in a teacup over why the OpenOffice.org people chose one thing over another. For example, why are the menu and toolbars still at the top rather than on the side, where there is more space? To me, those types of decisions are for the design team. In the end, they explained their decision, and for me, their reasoning in this case was solid: people expect them to be at the top, so we decided to keep them there. Good enough for me.

February 10, 2010

In defense of scientists

Filed under: Uncategorized — ctrambler @ 2:22 am

Who am I to argue with a professor? In the Guardian, Professor Darrel Ince wrote an excellent article on the problems with not releasing the source code of scientific programs for public scrutiny. As a person working in an academic environment, in a scientific support role in a non-engineering department, I can say I share his view. However, probably because I am on the lowest possible rung of the ladder, i.e., not even on the ladder, I feel that I need to bring you, the readers, the scientists’ viewpoint.

Why didn’t those scientists release their code in open source fashion? The primary reason is that they wanted to commercialise their product. Luckily, that thinking is shifting now. Take one example I am closely involved in: it took two people, a researcher the professor respects a lot, and me, someone he trusts on the computing side, to convince him that releasing the software under the General Public License would not decrease its value. The type of work we do means the software is simply a tool. What they are selling is expertise in the field.

Scientific software is not your run-of-the-mill Photoshop wannabe. It crunches data, nothing more. Therefore, as a scientist, at least a serious one, what you want to know is whether you used the correct model (as embodied in the program), and whether you are asking the correct question. For example, if you ask the computer and the computer replies that colour A is not brighter than colour B, you cannot reach the conclusion that colour A is dimmer than colour B. This is not information you get from journal papers, and once you learn it, you can use whatever software you choose. And looking at the field, any field for that matter, the definition of scientific research means there are not many people in the world who have that knowledge. Therefore users need support, and that means scientists should concentrate on selling that precious commodity, which is inexhaustible. The software is merely a demonstration of their competency. For that, they should disseminate it as widely as possible.
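
The colour example can be made concrete with a toy sketch. With noisy measurements, a comparison has three outcomes, not two, so “not brighter” does not imply “dimmer”. The threshold and readings below are made up for illustration.

```python
# Toy illustration of the "not brighter does not mean dimmer" point:
# when measurements carry noise, a comparison can come back inconclusive,
# and negating one direction does not establish the other.
# (The tolerance and readings are made up for illustration.)
def compare_brightness(a, b, tolerance=0.05):
    """Return 'brighter', 'dimmer' or 'indistinguishable' for A vs B."""
    if a - b > tolerance:
        return "brighter"
    if b - a > tolerance:
        return "dimmer"
    return "indistinguishable"

result = compare_brightness(0.52, 0.50)
print(result)                    # indistinguishable
print(result != "brighter")     # True: A is not brighter, yet...
print(result == "dimmer")       # ...False: A is not dimmer either
```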

Second, experience shows that putting software out as open source does not really improve its quality. The first problem: there are no eyeballs to look at it. The number of people with the necessary expertise to vet the software is simply not there. Fellow researchers (read: the competition)? No. They probably do not know how to program in the language you are using, and even if they do, they have no incentive to do so.

Third, scientists are not judged by the quality of their programs, or by the programs themselves, but by how they bring new information to their field of interest.

Fourth, one key measure of academic software quality assurance is the number of times the software has been used to process different data. Scientists rely on the fact that the more times the software is used, the more likely it is that bugs are found and quashed. That is why, instead of saying “more eyeballs make bugs shallow”, I say “more data make bugs less likely”. Under this system, being open source is not a necessary attribute.
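
A sketch of what this looks like in practice: run the software over every dataset you can find and flag anything that crashes or produces an implausible result. The analyse() function and the sanity bound below are stand-ins for whatever the real software does.

```python
# Sketch of the "more data make bugs less likely" idea: run the analysis
# over every available dataset and flag crashes or implausible results.
# (analyse() and the plausibility bound stand in for the real software.)
from pathlib import Path

def analyse(path):
    # placeholder for the scientific computation under test
    values = [float(line) for line in path.read_text().split()]
    return sum(values) / len(values)

failures = []
for dataset in sorted(Path("datasets").glob("*.txt")):
    try:
        result = analyse(dataset)
        if not (0.0 <= result <= 1.0):   # domain-specific sanity bound
            failures.append((dataset.name, f"implausible result {result}"))
    except Exception as exc:             # any crash is a bug report
        failures.append((dataset.name, repr(exc)))

print(f"{len(failures)} suspicious runs")
```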

Having painted a bad picture on the programming front, I think I need to answer two questions: first, is releasing software as open source important, and second, is there anything we computing professionals can do?

The answer to both is yes.

The reason for the yes to the first question is that the mathematical equations alone are insufficient to describe what the scientists really did. They made assumptions, and managed outliers and borderline cases, without making those decisions clear in the article. Things are accepted in journals that a more computing-based journal would reject for insufficient information. A few times, when I have had to rewrite someone’s algorithm, I have had to infer their decisions on outliers and borderline cases by reading the code. A few times, I have come across constructions in the software that will fail because they rely on peculiarities of a particular system. Luckily, most of the time, this just increases noise rather than invalidating the work.
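
A hypothetical example of the kind of silent decision I mean: the paper reports only a mean, while the code quietly clips out-of-range readings first. A reader of the paper alone would never know; a reader of the code would.

```python
# Hypothetical example of a decision visible only in the code: the paper
# gives the equation for a mean, but the code quietly clips outliers first.
def mean_with_clipping(values, lower=0.0, upper=100.0):
    # undocumented choice: out-of-range readings are clamped, not dropped
    clipped = [min(max(v, lower), upper) for v in values]
    return sum(clipped) / len(clipped)

print(mean_with_clipping([3.0, 5.0, 250.0]))  # 36.0, not the naive 86.0
```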

The answer to the second question is a definite yes, but we need to weave in advantages that they can see.

I do this by slowly moving them towards modern software practices, like reusing pieces of software. Academic/scientific software tends to be a silo in itself, even between parts of the same software. A lot of scientists, when needing a function they already have in another part of the software, prefer to duplicate the code instead of working to make sure the same function can serve both parts. We know this is just storing up problems for the future. However, there is no point simply pointing this out to them. The way I sell it is to show them how reusing software makes sense and makes their programs more robust. I show them that sharing the function creates an incentive to improve it, and ensures that improvements propagate to all users. In the long run, it actually shortens their programming cycle, making them more competitive as they build up a library of tried and tested functions and find that they do not have to recode anything.
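
A minimal sketch of the sales pitch, with made-up names: move the duplicated routine into one shared module, and every analysis that imports it picks up later fixes automatically.

```python
# Sketch of the reuse argument, with hypothetical names. The duplicated
# routine lives once in a shared module; every analysis imports it, so a
# fix or improvement made here propagates to all users automatically.

# labutils.py -- the one shared, tried-and-tested copy
def normalise(values):
    """Scale values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # borderline case handled once, here
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# analysis_a.py and analysis_b.py then both do:
#   from labutils import normalise
# instead of each keeping a private, slowly diverging copy.
```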

So, in short, things are changing, for the better.

Microsoft squandered opportunity? Or not?

Filed under: Uncategorized — ctrambler @ 1:25 am

In an op-ed contribution in the New York Times, timed to ride on the iPad news, a former Microsoftee lambasted the company for the opportunities it lost in the past decade. Well, we get these articles from time to time. This time it is different: Microsoft felt compelled to write a rebuttal.

Comparing the strength of both articles, I think Dick Brass (the former Microsoftee) scored the bigger goal with the rebuttal. Microsoft did not dispute his account of events; instead, they tried a sleight of hand, i.e., asking you to consider other issues surrounding the question. Don’t worry about the slow speed of bringing ClearType to market, but consider its scale and impact on the market. The question is, of course, why not achieve both? Surely bringing it to market earlier increases its scale and impact?

Brass cites infighting inside Microsoft as the reason for the lack of innovation. While there is some truth in that, I don’t know how it compares with other similarly sized companies. Do we see it at IBM? I don’t know, as nobody has yet spilled the beans.

What is important, then? Learning from experience. Companies, like people, make mistakes. The key is to learn from them. That advice is not pointed at Microsoft alone, but at you and me.

Ouch… That’s painful

Filed under: Uncategorized — ctrambler @ 12:58 am

Last week, the net was buzzing with news that an Australian judge handed the movie companies their most humiliating public defeat yet. If you think the movie companies have gone too far, and relish the opportunity to see a judge side with an ISP, iiNet, in saying that it does not have to pass on infringement notices, then sit down and enjoy Ars Technica’s coverage of the decision.

I don’t know how long an Aussie judge normally writes in a judgement, but I think 200 pages is really, really long. Having said that, I am going to sit back and read it. As for the next step, you know that the movie companies are going to appeal. Will the judgement be reversed? I don’t know. If it is not, there will be another defeat for the movie companies, and that will have a big impact on their anti-piracy strategy.

The Aussie Communications Minister acted responsibly and stayed above the squabble. He only urged the two sides to discuss the issue further.

Reading the Ars Technica article for background information, you can see that a lawsuit was inevitable. First, the movie companies set themselves up by flooding ISPs with requests to pass on dodgy copyright infringement notices to their customers. They had already implied that any ISP who refused would be dealt with. With such a big company refusing to toe the line, they had to make an example of it. Even if they failed, they calculated that showing they were willing to follow up on the implied threat would scare other ISPs into reconsidering any refusal. They probably did not expect a 200-page judgement and so much negative publicity surrounding it.

iiNet painted a big target on itself with its public defiance. So what do we have? The inevitable.

Now, let’s wait and see what the appeal court says.

February 3, 2010

Ah… Almost forgot about the CrunchPad…

Filed under: Uncategorized — ctrambler @ 2:19 am

With the hype surrounding the iPad, then the proposed Microsoft tablet called Courier, I had almost forgotten about the CrunchPad, or should I call it the Joojoo?

Engadget’s coverage of the CrunchPad saga, on this page, is quite good. I don’t sense any bias against either party, but of course we always have to bear in mind that Engadget is a competitor of TechCrunch.

I refer you to two commentaries in particular: Engadget’s dissections of the first filings from both sides (TechCrunch’s and FusionGarage’s). They are good. The bottom line, which they hit on, is “Where’s the contract?”. To me, this reinforces the need for venture capital: a VC would have ensured that this could not happen, and that everyone was clear on what the relationship was.

What do I think? As for the filings themselves, we see the standard legal posturing and manoeuvres lawyers perform in the early stages of a lawsuit. Both sides present their view in their own favour. TechCrunch’s problem is that there is no formal contract; FusionGarage’s is that maybe a formal contract is not really that important, but what both sides can show they put into the project is. Who is right? Who is wrong? Only a judge can sort that out.

Now that we have the iPad, it is the elephant in the room, as pointed out by Engadget. I don’t care how great the Joojoo is; it may support 1080p video, it may be the next big thing not from Apple. But I prefer the iPad, even though I can only get the most basic version. Yes, before you say it, I have bought all the PR and hype Apple is throwing at me.
