Objects should implement the Writable interface so they can be serialized for transfer in Hadoop. Take Lucene's ScoreDoc class as an example:
public class ScoreDoc implements java.io.Serializable {

    /** The score of this document for the query. */
    public float score;

    /** Expert: A hit document's number.
     * @see Searcher#doc(int) */
    public int doc;

    /** Only set by {@link TopDocs#merge} */
    public int shardIndex;

    /** Constructs a ScoreDoc. */
    public ScoreDoc(int doc, float score) {
        this(doc, score, -1);
    }

    /** Constructs a ScoreDoc. */
    public ScoreDoc(int doc, float score, int shardIndex) {
        this.doc = doc;
        this.score = score;
        this.shardIndex = shardIndex;
    }

    // A convenience method for debugging.
    @Override
    public String toString() {
        return "doc=" + doc + " score=" + score + " shardIndex=" + shardIndex;
    }
}
How should I serialize it with the Writable interface? And what is the relationship between the Writable interface and java.io.Serializable?
I don't think tampering with a built-in Lucene class is a good idea. Instead, write your own class that holds a field of type ScoreDoc and implements Hadoop's Writable interface. It would look something like this:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
import org.apache.lucene.search.ScoreDoc;

public class MyScoreDoc implements Writable {

    private ScoreDoc sd;

    @Override
    public void write(DataOutput out) throws IOException {
        // Parse the field values out of ScoreDoc's toString() output,
        // which looks like: "doc=5 score=1.5 shardIndex=-1"
        // (the fields are public, so reading sd.doc etc. directly also works)
        String[] splits = sd.toString().split(" ");
        int doc = Integer.parseInt(splits[0].split("=")[1]);
        float score = Float.parseFloat(splits[1].split("=")[1]);
        int shardIndex = Integer.parseInt(splits[2].split("=")[1]);

        out.writeInt(doc);
        out.writeFloat(score);   // score is a float, not an int
        out.writeInt(shardIndex);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Read the fields back in the same order they were written
        int doc = in.readInt();
        float score = in.readFloat();
        int shardIndex = in.readInt();
        // ScoreDoc's constructor takes (doc, score, shardIndex)
        sd = new ScoreDoc(doc, score, shardIndex);
    }

    // toString(), equals(), hashCode() as needed
}
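To see that write and readFields mirror each other on the wire, here is a minimal round-trip sketch using plain java.io data streams (no Hadoop or Lucene dependency, and the class name RoundTrip is just for this sketch): it writes the three ScoreDoc fields in one fixed order and reads them back in the same order, which is exactly the contract a Writable must keep.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTrip {
    public static void main(String[] args) throws IOException {
        // Serialize the three fields in a fixed order,
        // as MyScoreDoc.write would.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(5);        // doc
        out.writeFloat(1.5f);   // score
        out.writeInt(-1);       // shardIndex

        // Deserialize in the same order, as readFields would.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        int doc = in.readInt();
        float score = in.readFloat();
        int shardIndex = in.readInt();
        System.out.println("doc=" + doc + " score=" + score
                + " shardIndex=" + shardIndex);
        // prints: doc=5 score=1.5 shardIndex=-1
    }
}
```

If write and readFields ever disagree on field order or width (e.g. writeInt paired with readFloat), the stream is silently misread, which is why keeping the two methods symmetric matters.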