This post is just a quick translation; the original is at http://wiki.apache.org/lucene-java/ImproveIndexingSpeed

* Be sure you really need to speed things up.

Many of the ideas here are simple to try, but others will necessarily add some complexity to your application. So be sure your indexing speed is indeed too slow and the slowness is indeed within Lucene.

* Make sure you are using the latest version of Lucene.

* Use a local filesystem.

Remote filesystems are typically quite a bit slower for indexing. If your index needs to be on the remote filesystem, consider building it first on the local filesystem and then copying it up to the remote filesystem.

* Get faster hardware, especially a faster IO system.

* Open a single writer and re-use it for the duration of your indexing session.

* Flush by RAM usage instead of document count.

For Lucene <= 2.2: call writer.ramSizeInBytes() after every added doc, then call flush() when it's using too much RAM. This is especially good if you have small docs or highly variable doc sizes. You need to first set maxBufferedDocs large enough to prevent the writer from flushing based on document count. However, don't set it too large, otherwise you may hit LUCENE-845 (fixed as of Lucene 2.3). Somewhere around 2-3X your "typical" flush count should be OK.

For Lucene >= 2.3: IndexWriter can flush according to RAM usage itself. Call writer.setRAMBufferSizeMB() to set the buffer size. Be sure you don't also have any leftover calls to setMaxBufferedDocs since the writer will flush "either or" (whichever comes first).

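For instance, a minimal sketch of the 2.3-style setup; the index path is a placeholder and StandardAnalyzer is just an example choice:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

Directory dir = FSDirectory.getDirectory("/path/to/index");
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

// Lucene >= 2.3: flush by RAM usage rather than by document count.
writer.setRAMBufferSizeMB(48.0);
// Don't also call writer.setMaxBufferedDocs(...) -- the writer flushes on
// whichever threshold is hit first.
```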

* Use as much RAM as you can afford.

More RAM before flushing means Lucene writes larger segments to begin with, which means less merging later. Testing in LUCENE-843 found that around 48 MB is the sweet spot for that content set, but your application could have a different sweet spot; the right value also depends on your hardware, so test for yourself and pick a balanced value.

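To find your own sweet spot, you might time the same batch of documents at several buffer sizes. A rough sketch, where loadTestDocuments() is a hypothetical stand-in for your document source:

```java
double[] sizesMB = { 16.0, 32.0, 48.0, 64.0, 96.0 };
for (int i = 0; i < sizesMB.length; i++) {
    IndexWriter writer = new IndexWriter(
            FSDirectory.getDirectory("/tmp/bench-" + (int) sizesMB[i]),
            new StandardAnalyzer(), true);
    writer.setRAMBufferSizeMB(sizesMB[i]);
    long start = System.currentTimeMillis();
    for (Document doc : loadTestDocuments()) {  // hypothetical corpus loader
        writer.addDocument(doc);
    }
    writer.close();
    System.out.println(sizesMB[i] + " MB -> "
            + (System.currentTimeMillis() - start) + " ms");
}
```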

* Turn off compound file format.

Call setUseCompoundFile(false). Building the compound file format takes time during indexing (7-33% in testing for LUCENE-888). However, note that doing this will greatly increase the number of file descriptors used by indexing and by searching, so you could run out of file descriptors if mergeFactor is also large.

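Continuing with the writer from the earlier sketch, this is a single switch:

```java
// Keep per-segment files separate instead of packing them into .cfs files:
// faster indexing, but many more open files (check your ulimit -n).
writer.setUseCompoundFile(false);
```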

* Re-use Document and Field instances

As of Lucene 2.3 there are new setValue(...) methods that allow you to change the value of a Field. This allows you to re-use a single Field instance across many added documents, which can save substantial GC cost.

It's best to create a single Document instance, then add multiple Field instances to it, but hold onto these Field instances and re-use them by changing their values for each added document. For example you might have an idField, bodyField, nameField, storedField1, etc. After the document is added, you then directly change the Field values (idField.setValue(...), etc), and then re-add your Document instance.

Note that you cannot re-use a single Field instance within a Document, and, you should not change a Field's value until the Document containing that Field has been added to the index. See Field for details.

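A minimal sketch of the pattern; MyRecord and records are hypothetical stand-ins for your data source, and writer is an already-opened IndexWriter:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

// Create the Document and its Field instances once...
Field idField   = new Field("id",   "", Field.Store.YES, Field.Index.UN_TOKENIZED);
Field bodyField = new Field("body", "", Field.Store.NO,  Field.Index.TOKENIZED);
Document doc = new Document();
doc.add(idField);
doc.add(bodyField);

// ...then only swap the values between addDocument() calls.
for (MyRecord rec : records) {         // hypothetical data source
    idField.setValue(rec.getId());
    bodyField.setValue(rec.getBody());
    writer.addDocument(doc);           // re-adds the same Document instance
}
```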

* Re-use a single Token instance in your analyzer

Analyzers often create a new Token for each term in sequence that needs to be indexed from a Field. You can save substantial GC cost by re-using a single Token instance instead.

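A minimal sketch of the 2.3 reusable-token API: a toy TokenStream that emits one hard-coded term by filling in the Token it is handed instead of allocating a new one per term:

```java
import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

public class OneTermStream extends TokenStream {
    private boolean done = false;
    private final char[] word = { 'h', 'e', 'l', 'l', 'o' };

    // 2.3 reusable-token API: fill in and return the caller's Token.
    public Token next(Token reusableToken) throws IOException {
        if (done) return null;
        done = true;
        reusableToken.setTermBuffer(word, 0, word.length);
        reusableToken.setStartOffset(0);
        reusableToken.setEndOffset(word.length);
        return reusableToken;  // same instance every time, nothing allocated
    }
}
```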

* Use the char[] API in Token instead of the String API to represent token Text

As of Lucene 2.3, a Token can represent its text as a slice into a char array, which saves the GC cost of new'ing and then reclaiming String instances. By re-using a single Token instance and using the char[] API you can avoid new'ing any objects for each term. See Token for details.

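The contrast, sketched inside a hypothetical analyzer whose working state is a char[] buffer plus start/length and offset variables:

```java
// String API (pre-2.3 style): allocates a String per term for the GC to reclaim.
Token t = new Token(new String(buffer, start, length), startOffset, endOffset);

// char[] API (2.3+): copy the slice into the Token's internal term buffer.
reusableToken.setTermBuffer(buffer, start, length);
reusableToken.setStartOffset(startOffset);
reusableToken.setEndOffset(endOffset);
```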

* Use autoCommit=false when you open your IndexWriter

In Lucene 2.3 there are substantial optimizations for Documents that use stored fields and term vectors, to save merging of these very large index files. You should see the best gains by using autoCommit=false for a single long-running session of IndexWriter. Note however that searchers will not see any of the changes flushed by this IndexWriter until it is closed; if that is important you should stick with autoCommit=true instead or periodically close and re-open the writer.

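A sketch using the 2.3-era constructor that takes the autoCommit flag; the path and analyzer are placeholders:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

IndexWriter writer = new IndexWriter(
        FSDirectory.getDirectory("/path/to/index"),
        false,                   // autoCommit=false
        new StandardAnalyzer());
// ... one long indexing session ...
writer.close();  // searchers see the changes only after the close
```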

* Instead of indexing many small text fields, aggregate the text into a single "contents" field and index only that (you can still store the other fields).

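For example (field names and values are illustrative):

```java
// Aggregate the small text fields into one indexed "contents" field and
// keep the originals as stored-only fields.
String contents = title + " " + author + " " + summary;

Document doc = new Document();
doc.add(new Field("contents", contents, Field.Store.NO,  Field.Index.TOKENIZED));
doc.add(new Field("title",    title,    Field.Store.YES, Field.Index.NO));
doc.add(new Field("author",   author,   Field.Store.YES, Field.Index.NO));
doc.add(new Field("summary",  summary,  Field.Store.YES, Field.Index.NO));
```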

* Increase mergeFactor, but not too much.

A larger mergeFactor defers merging of segments until later, thus speeding up indexing because merging is a large part of indexing. However, this will slow down searching, and you will run out of file descriptors if you make it too large. Values that are too large may even slow down indexing, since merging more segments at once means much more seeking for the hard drives.

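The knob itself is one call on the writer; the value 30 below is only an example to benchmark, not a recommendation:

```java
// Default is 10; larger defers merges (faster indexing), but hurts search
// speed and eats file descriptors if overdone.
writer.setMergeFactor(30);
```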

* Turn off any features you are not in fact using.

If you are storing fields but not using them at query time, don't store them. Likewise for term vectors. If you are indexing many fields, turning off norms for those fields may help performance.

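For example, with the 2.3-era Field options (names and values illustrative):

```java
// Stored but never searched: don't index it.
doc.add(new Field("path", path, Field.Store.YES, Field.Index.NO));
// Searched by exact value, no length/boost scoring needed: skip norms.
doc.add(new Field("tag", tag, Field.Store.NO, Field.Index.NO_NORMS));
// Only ask for term vectors when you actually use them at query time.
doc.add(new Field("body", body, Field.Store.NO, Field.Index.TOKENIZED,
                  Field.TermVector.NO));
```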

* Use a faster analyzer.

Sometimes analysis of a document takes a lot of time. For example, StandardAnalyzer is quite time consuming, especially in Lucene version <= 2.2. If you can get by with a simpler analyzer, then try it.

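If whitespace tokenization is good enough for your content, the swap is a one-liner:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.WhitespaceAnalyzer;

Analyzer analyzer = new WhitespaceAnalyzer();  // instead of new StandardAnalyzer()
IndexWriter writer = new IndexWriter(dir, analyzer, true);
```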

* Speed up document construction.

Often the process of retrieving a document from somewhere external (database, filesystem, crawled from a Web site, etc.) is very time consuming.


* Don't optimize unless you really need to (for faster searching).

* Use multiple threads with one IndexWriter.

Modern hardware is highly concurrent (multi-core CPUs, multi-channel memory architectures, native command queuing in hard drives, etc.) so using more than one thread to add documents can give good gains overall. Even on older machines there is often still concurrency to be gained between IO and CPU. Test the number of threads to find the best performance point.

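A minimal sketch sharing one writer across a fixed-size pool; the pool size of 4 and loadDocuments() are hypothetical, so benchmark your own thread count, and error handling is elided to a rethrow:

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

// IndexWriter is thread-safe: all workers share the single instance.
final IndexWriter writer = new IndexWriter(
        FSDirectory.getDirectory("/path/to/index"),
        new StandardAnalyzer(), true);
ExecutorService pool = Executors.newFixedThreadPool(4);  // tune this number
for (final Document doc : loadDocuments()) {             // stand-in source
    pool.execute(new Runnable() {
        public void run() {
            try {
                writer.addDocument(doc);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    });
}
pool.shutdown();
pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);  // wait for workers
writer.close();
```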

* Index into separate indices then merge.

If you have a very large amount of content to index then you can break your content into N "silos", index each silo on a separate machine, then use writer.addIndexesNoOptimize to merge them all into one final index.

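A sketch of the final merge step; the silo and destination paths are placeholders:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Each "silo" index was built on its own machine and copied here.
Directory[] silos = {
    FSDirectory.getDirectory("/indexes/silo0"),
    FSDirectory.getDirectory("/indexes/silo1"),
    FSDirectory.getDirectory("/indexes/silo2"),
};
IndexWriter writer = new IndexWriter(
        FSDirectory.getDirectory("/indexes/final"),
        new StandardAnalyzer(), true);
writer.addIndexesNoOptimize(silos);  // merges without a full optimize
writer.close();
```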

* Run a Java profiler.

If all else fails, profile your application to figure out where the time is going. I've had success with a very simple profiler called JMP. There are many others. Often you will be pleasantly surprised to find some silly, unexpected method is taking far too much time.


posted on 2009-08-13 16:06 Eric Yao