CyberTech Rambler

March 29, 2011

Bionic and GPL, and possible ‘hidden’ agenda?

Filed under: Uncategorized — ctrambler @ 7:02 pm

TheRegister got it right when it says most coders are not lawyers and vice versa, and that the law has shades of grey instead of being black and white. The question (pdf), raised by a lawyer named Edward Naughton, is whether Google’s clean-up of copyrighted material in the kernel header files for Bionic is good enough to stop the GPL from contaminating developers’ code. Google obviously thinks it is, but from the write-up one can safely say Naughton begs to differ.

If there is doubt about whether the Bionic headers are clean enough, pragmatic people like me will do what Brian Profitt does, i.e., simply go and ask the head of the Linux kernel, the very person whose copyright is ‘supposed’ to have been infringed. If he says no, then I am in the clear; if he says yes, I would probably ask him whether it is possible to clean up the copyrighted material to his satisfaction. Regardless of the answer, I would put him in touch with Google and let both parties work it out. I won’t go the whole nine yards of writing something up, whether a position paper, an email, or anything at all. But maybe that is because I am a programmer. Other people, including lawyers, Google’s competitors and (dare I say it?) journalists, might have a different view.

After reading the write-up, one thing is clear. Naughton is not alleging that simply using the function declarations in a header is infringing. In fact, there is plenty of evidence that using the declarations is not. My favourite is a judgment I once read (though I have forgotten where) in which the judge, after declaring that function names are absolutely necessary to copy in order to emulate a function and are therefore not protected by copyright, turned his attention to parameter names and said that while the defendant’s use of the same parameter names as the plaintiff went beyond absolute necessity, it was not by itself serious enough to constitute infringement. Naughton is, however, alleging that the arrangement of function names, the way the macros are written, and other things that appear in the header make the header copyrightable (I believe he is right on the money) and that, therefore, Google infringed GPLv2, and by the viral effect of GPLv2 and through Google’s misdeed, all developers using the Bionic library will be contaminated and subjected to GPLv2’s freedom requirements (seriously incorrect).
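To make the distinction concrete, here is a hypothetical C sketch (the names and the macro are my own invention, not taken from any real kernel or Bionic header). A bare declaration is essentially a fact of the interface, while a non-trivial macro embodies expressive choices by its author, which is where a copyrightability argument would focus:

/* hypothetical_header.h -- an illustrative sketch, not real kernel code */

/* A bare declaration: the names and types are dictated by the interface
   being emulated, leaving little room for creative expression. */
long sys_example_read(int fd, void *buf, unsigned long count);

/* A non-trivial macro: how the bit tests are chosen, arranged and
   combined is an expressive decision by the header's author. */
#define EXAMPLE_FLAG_VALID(f) \
        (((f) & 0x0001u) && !((f) & 0x8000u) && ((((f) >> 4) & 0x7u) != 0))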

One actually wonders why Naughton did not simply pick up the phone and ask Linus first. After all, he (mis)quoted him in the write-up [full quote and mail here; to Naughton’s credit, he gives the full link]. His failure to investigate and establish Linus’ position has yet to be explained. By quoting Linus, he shows he is aware of Linus’ status in kernel development, so why not drop him a line first? That is common sense, after all. If I thought someone as big as Google was infringing your code, the first thing I would do is call you to see whether you agree. Even if you think Naughton simply misread the email in the write-up, it is well known that Linus does not consider a program that uses the header files ‘as it is’ to call into the kernel through published methods to constitute a derivative work subject to the GPL. I choose the words ‘as it is’ carefully and intentionally.
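For context, this is all that using the headers ‘as it is’ amounts to in practice: a userspace program includes the published headers and calls into the kernel. The sketch below is my own minimal illustration, assuming ordinary glibc on Linux; it is not Bionic code.

/* Calling the kernel through its published interface. */
#define _GNU_SOURCE      /* glibc: exposes the syscall() declaration */
#include <stdio.h>
#include <unistd.h>      /* getpid(), a thin wrapper over a syscall */
#include <sys/syscall.h> /* SYS_getpid and other published syscall numbers */

int main(void)
{
    /* Both lines end up in the kernel; few would argue this makes
       the program a derivative work of Linux. */
    printf("via libc wrapper: %ld\n", (long)getpid());
    printf("via raw syscall:  %ld\n", (long)syscall(SYS_getpid));
    return 0;
}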

But perhaps… that is because I am a developer.

Putting on my lawyer hat, I suppose I can see him raising a nice discussion topic on copyright law, and an important one: with macros and other stuff in the header, is it possible that under copyright law the header is indeed copyrightable? Given that Linus doesn’t care, this is at best an academic question. It is a good starting point for a discussion among law students, or for educating developers on matters of copyright. But to cast this as potentially letting loose the viral effect of the GPL on developers using Bionic, without consulting the kernel development team or informing the reader of its position, is blatantly unfair. In fact, Prof. Nimmer, in his blog post, touches on the subject of what makes a header file copyrightable in a way that leaves no doubt that his intention is to educate the reader and stimulate a debate.

So, after reading the write-up, what do I think of it? I agree with Kuhn that it is a shame it is written as if it were a complete analysis when at best it presents a hypothesis for further investigation. Maybe this is the way lawyers are used to writing things: ask your lawyer for a legal opinion on whether you can legally do this and that, and this is the kind of write-up they will give you. That is the fatal flaw of the write-up: it is not a legal opinion, but it is presented as one.

The best way I can describe this write-up is as an advertisement. It is written to showcase Naughton’s skill and specialist knowledge of software and the law, in the hope of attracting more business for the law firm. The presentation of the write-up itself leaves me in no doubt that this is at least one of the intentions. I leave it to the reader to decide whether the choice of topic is unfortunate, or whether the PR people or Naughton chose the topic in order to generate the widest coverage.

Did anyone pay for the work? I do not know. It is not like the old days, when people told you the work was paid for by so-and-so. The fact that a write-up looks like an advert for a law firm and a lawyer’s services might just be a ploy to disguise the fact that a third party paid for it.

The connection to Microsoft? This is the technology conspiracy theorist’s wet dream. I stress that Microsoft is an extremely big company and therefore hires the services of a lot of law firms, and any law firm or lawyer would be proud to list Microsoft as a client. Thus, simply listing Microsoft as a client on a CV, website, etc. does not mean a thing to me.

In fact, if you do have Microsoft as your client, you cannot hide it. Much has been made of the changes to Naughton’s CV just before news of his write-up broke around 17th March. Comparing the current one with a Google-cached copy from 8th March shows that there were three changes, two of which were simply the replacement of the word ‘Microsoft’ with ‘Fortune 500 company’. Conspiracy theorists will say Naughton is trying to hide the connection. Maybe; it is possible. It is equally possible that Naughton, or someone in the law firm, noticed it and made a futile attempt (with Google’s cache around, it was always going to be futile) to steer the story away from the Microsoft angle.

More troubling is TheRegister’s chronicle of Naughton’s initial denial when asked whether he worked for anyone ‘involved’ in the situation; he only admitted to having worked with Microsoft when the question was put to him point-blank. His act of changing the word ‘Microsoft’ to ‘Fortune 500 company’ in his CV implies very strongly that he knows his work for Microsoft is related to the situation. That childish initial attempt to sidestep the question reminded me of Bill Clinton answering ‘no’ to Kenneth Starr’s question of whether he had oral sex with Monica Lewinsky, an answer technically correct under the definition of ‘oral sex’ adopted for the lawsuit but absurd, because it implied Lewinsky was the only one having it. Both are futile answers, and the truth eventually came out. I don’t buy his argument that he is simply following professional ethics by keeping his relationship with a client confidential, because he had no problem flaunting it, not once but twice, on his CV until at least 8th March.

March 9, 2011

Nokia paid to use Windows Phone 7

Filed under: Uncategorized — ctrambler @ 2:48 pm

A news story alleges that Microsoft paid Nokia to use Windows Phone 7.

No surprise for Microsoft watchers. Windows Phone 7’s take-up is, to my surprise, extremely low. It needs a big partner, and Nokia is a big catch. Microsoft is not shy about using its financial might to help adoption of its technology. If you need another example, see the Microsoft-Novell agreement.

For Nokia, I see it as another bet-the-company moment. Unlike Marconi, whose bet-the-company moment (selling off its crown jewels, the defence and wireless businesses, to become a ‘communications’ company) flopped and led to years of financial struggle, Nokia’s first bet-the-company moment (concentrating on mobile phones and not making toilet paper) paid off handsomely by turning it into a mobile phone giant.

A few years ago, Nokia was doing well in the mobile phone business. The arrival of the iPhone, and the rise of contract manufacturers such as HTC into branded competitors, mean Nokia’s star is fading. To arrest the decline it needs some bold rethinking, and this is definitely one such move.

The bet is that Windows Phone 7 will become popular, putting Nokia in a position to ride the wave. So far, Windows Phone 7’s adoption rate is far below expectations. That is what created the opportunity for a Microsoft payment, and Nokia took it. Windows Phone 7 is still rudimentary by smartphone OS standards and is still playing catch-up, but with Nokia’s expertise and Microsoft’s might it could one day be a competent competitor in the yet-to-be-decided smartphone operating system wars.

Like the Microsoft-Novell agreement, this deal is riskier for the Microsoft partner than for Microsoft. Both deals make sense for the partners; whether they work out depends on the partner’s skill in getting the best out of them. In Novell’s case, Novell probably did not get as much as it could; hopefully Nokia will fare better, much better.

Patent war for VP8 heats up

Filed under: Uncategorized — ctrambler @ 2:33 pm

It is reported that the US Department of Justice is investigating whether the patent pool organization MPEG-LA’s effort to create a patent pool for VP8 falls foul of antitrust regulation.

On the surface, I find MPEG-LA’s effort a bit weird. Patent pools are usually formed by the champions of a standard to make it easier for licensees to license the technology. Having a patent pool this way helps a standard’s progress through standardization bodies and makes licensing the technology easier. However, the proposed VP8 patent pool is not initiated by the champion of VP8, i.e. Google. In fact, since Google wants a royalty-free standard for web multimedia, it is clear that Google is not interested in such a patent pool, as the reason for creating one is to collect royalties. Instead, it is being formed by other parties not related to VP8. It is also unclear whether the patent pool is set up to oppose or to facilitate VP8’s standardization. MPEG-LA’s stewardship of H.264 (a rival to VP8) points, if anything, to it potentially being an opponent of VP8.

What MPEG-LA is doing is perfectly legal, but suspicious. I think exactly what MPEG-LA has in mind is what the Department of Justice wants to find out. I cannot see Google not having a hand in this. If I were Google, I would ask for an investigation. At a minimum, it sends a notice to MPEG-LA: I am prepared to challenge you and investigate all your patent claims. It is also one of the few ways to force the organization, in legally binding terms, to show its hand in this patent poker game, and it is a preemptive strike at MPEG-LA.

To an outsider like me, it says two things: one, Google is serious about pushing the VP8 (WebM) standard and keeping it royalty-free; and two, Google is possibly feeling some heat from the patent pool.

No doubt MPEG-LA will say it is just doing what its members ask it to do. As a membership-led organization, it is extremely likely it has to. Part of the purpose of MPEG-LA is to keep its members at a distance from any criticism of patent misbehaviour; some will call this doing the dirty work for its members. I particularly like MPEG-LA’s response when TheRegister asked it to comment on the supposed investigation (see the end of the article in the first link): it never confirms or denies any government proceedings. That is not doing it any favours. Reputable companies and organizations are often prepared to acknowledge a government investigation of this magnitude.

March 7, 2011

At least one problem with LSE’s trading platform is not Linux specific

Filed under: Uncategorized — ctrambler @ 6:52 pm

ComputerWorldUK has the story on what happened behind the scenes of the recent London Stock Exchange chaos. At least on the occasion discussed in the article, the Linux-based trading platform is not the main culprit.

The article makes it clear that it is more of an organizational problem. You can either blame LSE’s insistence on making a change (now reversed) that its data vendors did not want, or blame the data traders for not being prepared enough for the change. Either way, we still cannot rule out problems with LSE’s new Linux platform, except to say that the article makes them look less likely.

On a previous post on the subject, Ian Easson commented that if LSE and the Toronto Stock Exchange (TXE) merge, LSE might use Toronto’s platform. Certainly, with the acquisition of Millenium IT and the rollout of this trading platform, LSE has acquired the expertise to merge the two platforms. The merged entity (if the merger happens) might ultimately use TXE’s trading platform, use LSE’s existing platform, or ditch both and start afresh. I don’t think both exchanges using the same platform will happen anytime soon. If anything, the fact that the transition to the new platform happened recently, meaning both LSE and the companies trading with LSE have just put a lot of effort and money into it, means the momentum for moving to TXE’s platform, or a new ‘better’ platform, is not there. Even if LSE is prepared to book the new Linux platform as a loss on its account books, others will not. Given that those others managed to force LSE into reversing its closing price decision, I cannot see LSE forcing through a platform switch soon.

Like it or not, for better or for worse, LSE is stuck with the Linux platform for the next few years.

The more likely scenario, if the merger goes ahead, is that interoperability between the two trading platforms will improve. Both exchanges are probably keen to take the opportunity to gain experience in making the two platforms interoperate smoothly, because they can use that expertise when they acquire or merge with other exchanges. Another opportunity here is to gain experience of how trading platforms for two exchanges with overlapping trading times behave. Both are important factors to consider and, from an IT planning viewpoint, a better proposition than ditching one platform in favour of another without due consideration.

Taking shots at Oracle

Filed under: Uncategorized — ctrambler @ 6:27 pm

If you did not know it already, here is confirmation that a lot of people think Oracle and open source don’t mix. IMHO, with the acquisition of SUN, Oracle has leapfrogged Microsoft as open source’s biggest enemy. With Microsoft, we see only a concerted barrage of marketing material and FUD. With Oracle, we see good, solid open source projects once spearheaded by SUN getting sidelined, or people losing confidence in them. Microsoft is like the enemy on the other side of the wall; Oracle is like the enemy within. The enemy within is the more dangerous one.

With Oracle, we can finally put to the test the theory that open source software has a community behind it and can survive beyond its main commercial sponsor. We are spoilt for choice of testbeds: LibreOffice, Jenkins, and GridEngine. What we need is time for the experiment to run its course. So far, we have only seen the initial euphoria.

Not that Oracle cares about its reputation in open source. To be frank, why should it? It is a company famous for making hard-nosed business decisions, and it thrives on its ability to make them.

The only thing that comes as a slight surprise is that RedHat feels it needs to react to Oracle by making it harder for Oracle to repackage RedHat Enterprise Linux as Oracle Unified Linux. Brian Profitt’s article puts into context what the slight change in RedHat’s distribution strategy (releasing one big tarball instead of the vanilla kernel plus patches) means to the developer universe (including Oracle’s). Quite simply, it is a nuisance. Instead of being given a step-by-step guide to what has changed, you have to find out by reverse engineering, in practice by diffing RedHat’s tree against the matching vanilla kernel release. It sticks to the letter of the GPL but arguably not the spirit: part of the point of open source is to make reverse engineering of this kind redundant. However, it must be said that RedHat still contributes changes ‘upstream’ (supplying them back to kernel development), which is the more important part compared with ‘downstream’ (making it easier for people to modify RedHat’s source).

The sad thing is that, rather than RedHat getting all the blame, open source supporters like me simply accept it as an (almost necessary) consequence of Oracle’s bad-boy behaviour: another point against Oracle.


March 2, 2011

Library vs Publisher war starting?

Filed under: Uncategorized — ctrambler @ 5:54 pm

Got to give it to TheRegister for the title “Library e-book too tatty to lend”.

Bottom line: your library will not be able to loan a HarperCollins ebook more than 26 times, to reflect the fact that HarperCollins wants to emulate the paper book replacement cycle. It claims that a library normally has to replace a paper book after 26 loans because the book becomes tatty by then, so why shouldn’t it emulate that with ebooks?

I don’t find the argument in HarperCollins’ open letter persuasive. And no, I will not discuss it on HarperCollins’ preferred forum, because I don’t want my opinion to be subject to potential censorship. While I subscribe to the argument that there is a need to support the publishing ecosystem, I do not believe it has to be the current model that HarperCollins is trying to protect with this move.

Ebooks are different from paper books. One principal advantage is that they cannot wear out, nor is it possible for libraries to ‘lose’ one. To ask them to emulate the way paper books degrade with use is stupid. The 26-loan cap is an arbitrary limit, driven more by HarperCollins’ desire to restrict a popular title’s circulation to about one year than by real usage: at a typical two-week loan period, 26 back-to-back loans is 26 × 14 = 364 days, almost exactly a year. It also does not take into account that libraries are good at restoring books, with sellotape or by rebinding.

Besides, what are we going to emulate next? Random deletion of ebooks held by the library, to emulate books lost to theft or to borrowers who never return them?
