Is "peer to patent" really "peer review"?
#1. What does it mean to have peer review of a patent?
#2. Is the peer to patent project actually about peer review?
Of the first issue: peer review, in the case of journals, involves a referee with some knowledge of the field examining a manuscript and determining that the conclusions presented are supported by the data, and that the data were acquired (and interpreted) in a scientifically acceptable way. If some relevant reference is missing, the referee will bring it up. It is a rare event when a journal referee says "all of this was done before" or "this is obvious in view of X." If peer review of patents were done in a way analogous to peer review of journals, this would be the approach: the question "does it work?" [enablement] would be at the top, and "was it done before?" [anticipation, obviousness] would be at the bottom.
Of the second issue: the peer to patent project invites members of the public (experts in the field or not) to submit prior art for use by the Patent Office. The examiner takes the art and applies it to the claims. Because the examiner, not the public, is doing the "review," it is not "peer" review. Because the public's role is limited to supplying prior art (not assessing enablement), it is not "peer review" as done in journals.
There is no harm in giving the examiner access to other sources of information. Whether it is effective depends on whether the prior art supplied by the public is MORE RELEVANT than what the examiner already has, and on whether the examiner is given sufficient time to review it.
In this context, recall the absolute howling in the technical community about the Berkeley/Eolas patent, which produced a re-examination ordered by USPTO Director Jon Dudas. Prior art presented by the W3C, in the form of claim charts prepared by the well-known firm of Pennie & Edmonds, didn't even get to first base in the re-examination. In spite of all the complaints, Eolas/Berkeley won the re-examination without amending claims. If a well-articulated analysis such as this proved to be without merit, how frequently is John Q. Public going to hit a home run? Is the program cost-effective at the margins? Is it going to change the world, or is it merely a form of venting?
Back in February, I brought up some OTHER issues with peer review of patents on techdirt.
I never did get a response.
A separate issue about the handling of prior art by journals concerns what a journal does when prior art reveals a problem with an article it has published.
I had written:
Wonder how Korea's Woo Suk Hwang pulled off one of the greatest scientific frauds of all time? The journals themselves bear some responsibility. First, they compete for the trendiest, most cutting-edge research stories. Second, they tend to ignore conflicts of interest, as Science ignored the pending patent applications of Hwang and Schatten, which existed at the time of article submission and gave the authors a vested interest in publicity for work that would benefit them financially. Third, once an article is on board, the journals become stakeholders, very reluctant to accept criticism of an article published in their pages.
Of the third point, here's an excerpt from an article in Intellectual Property Today, titled "Zurko and the Optimization of Fact-finding: Who Can You Believe?," originally published in February 1999. (...)
See "How did Woo Suk Hwang trick the scientific world about stem cells?"
As an important side point, one used to be able to find this article through a Google search of appropriate key words. As of May 19, 2006, this article cannot be found using Google. I don't mean it's on page 300 of a set of 10,000,000 results; I mean it is NOT FOUND AT ALL. It's not there now, but it was there in the past. As Boy George (Lieutenant Lush) said: you come and go.
Google can configure its search engine as it wants, but one needs to understand that the results of the "same" search change from day to day, not only in the ordering of results but also in which results are present at all.