This program is fairly slow, but the load it places on the system is very light, because it spends most of its time waiting for the resolver. Thanks to the caching, repeated visits from the same address result in only one lookup, so large files are processed proportionally faster than small ones. Even so, a large file can take quite some time to process. The solution is to split the log file and run several resolution processes in parallel. This is done by the script splitwr:

	splitwr logfile > logfile.resolved
	webalizer logfile.resolved
	rm logfile.resolved

By default, splitwr runs 20 parallel resolution processes. The number can be changed by editing the script.

WWW: http://siag.nu/webresolve/
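The caching described above can be sketched as a small wrapper around any reverse-lookup function. This is an illustration of the idea only; the names are hypothetical and not taken from webresolve itself:

```python
# Minimal sketch of a per-address lookup cache; function and variable
# names are illustrative, not from webresolve.
def make_cached_resolver(lookup):
    """Wrap a reverse-DNS lookup so each address is resolved only once."""
    cache = {}
    def resolve(addr):
        if addr not in cache:
            cache[addr] = lookup(addr)  # slow path: ask the resolver once
        return cache[addr]              # fast path: reuse the cached name
    return resolve

# Stand-in lookup that records how often it is actually called:
calls = []
resolve = make_cached_resolver(lambda addr: calls.append(addr) or f"host-{addr}")
for addr in ["1.2.3.4", "1.2.3.4", "5.6.7.8", "1.2.3.4"]:
    resolve(addr)
print(len(calls))  # 2 distinct addresses -> only 2 real lookups
```

The more often an address repeats in the log, the higher the cache hit rate, which is why larger files resolve proportionally faster.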
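The splitwr script itself is not reproduced here, but the split-and-merge idea behind it can be sketched with standard tools. Everything below is a stand-in: the sample log, the chunk file names, and tr in place of the real resolver:

```shell
#!/bin/sh
# Sketch of the split-and-resolve-in-parallel idea: split the log into
# chunks, run one "resolver" per chunk in the background, then
# reassemble the results in order.  'tr' stands in for the resolver.
set -e
printf 'a 1\nb 2\nc 3\nd 4\n' > logfile    # stand-in log file
split -l 2 logfile chunk.                  # 2 lines per chunk: chunk.aa, chunk.ab
for f in chunk.*; do
  tr 'a-z' 'A-Z' < "$f" > "$f.resolved" &  # one background job per chunk
done
wait                                       # let all parallel jobs finish
cat chunk.*.resolved > logfile.resolved    # lexical glob order = original order
cat logfile.resolved
```

Because split names the chunks in lexical order, concatenating the `.resolved` files with a glob reassembles the log in its original line order.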
path: root/java/jdk14-doc/Makefile
Commit message                                                    Author   Date        Files  Lines
Allow package builds                                              kris     2006-09-03  1      -1/+1
. Chase download location.                                        glewis   2005-01-12  1      -2/+1
Reset znerd's ports maintainership:                               hq       2004-12-03  1      -1/+1
Using PORTDOCS macro.                                             znerd    2004-04-16  1      -26/+12
Fixed generation of pkg-plist.                                    znerd    2004-03-26  1      -10/+8
Added LATEST_LINK.                                                znerd    2004-02-10  1      -1/+2
Use the SORT macro from bsd.port.mk.                              trevor   2004-01-22  1      -1/+1
. Update to the 1.4.2 docs.                                       glewis   2003-09-27  1      -2/+2
Clear moonlight beckons.                                          ade      2003-03-07  1      -0/+1
Fixed generation of plist file. The file was previously written   znerd    2002-11-21  1      -0/+1
Removed unnecessary PLIST_SUB setting.                            znerd    2002-11-20  1      -1/+0
Upgrade to JDK 1.4.1 documentation. Automagically generating      znerd    2002-11-20  1      -4/+23
Not using IGNORE anymore to avoid package building.               znerd    2002-10-10  1      -5/+1
Upgrade from 1.4.0-beta 3 to 1.4.0.                               znerd    2002-05-24