Any application that runs as a server must produce logs; otherwise, how would you ever troubleshoot a problem?
How those logs get written is a craft in itself; if it weren't, Java wouldn't have so many vendors competing to provide logging frameworks!
And log rolling is usually a hard requirement, because nobody can bound the volume of logs, or keep one ever-growing file readable.
1. Approaches to log rolling
There are two main directions for implementing log rolling:
1. Let the logging framework do the rolling itself and manage its own log files. The upside: it ships with the application and needs no external machinery. The downside: it depends entirely on the application, costs the application some performance, and if the framework has a bug, the rolling cannot be trusted. (I'll walk through logback's rolling as the example.)
2. Delegate the rolling to a third-party tool, typically via the console or an agent. The upside: the rolling is independent and non-invasive; if the tool really misbehaves, you can simply kill it, and a bug in the application itself cannot break the rolling, so you always keep enough evidence for troubleshooting. (I'll walk through cronolog as the example.)
2. Log rolling with logback
1. Application-side rolling: logback's RollingPolicy, for instance, ships with rolling support, but it is full of traps!
1.1. First, the rolling configuration (in logback.xml):
<!-- output to file -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log_path}/api.ln.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy" >
<fileNamePattern>${log_path}/api.%d{yyyy-MM-dd_HH}.log</fileNamePattern>
<!-- keep 10 days' worth of history capped at 8GB total size -->
<maxHistory>10</maxHistory>
<totalSizeCap>8GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>%d{MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
This configures time-based rolling, once per hour, keeping at most 10 days of history capped at 8GB in total. We'll see how it actually behaves below!
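Incidentally, what the %d{yyyy-MM-dd_HH} token in the fileNamePattern will expand to can be previewed with SimpleDateFormat (the rest of the pattern is literal text); a small sketch, with the path and prefix borrowed from the config above:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class PatternPreview {
    // Expands the date token the same way the fileNamePattern above will.
    static String rolledName(String dir, Date date) {
        String stamp = new SimpleDateFormat("yyyy-MM-dd_HH").format(date);
        return dir + "/api." + stamp + ".log";
    }

    public static void main(String[] args) {
        // e.g. /var/logs/api.2024-01-01_10.log, depending on the current hour
        System.out.println(rolledName("/var/logs", new Date()));
    }
}
```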
1.2. Now, the rolling code!
First, one would expect some dedicated thread to keep running for the rolling (whether the application or a third party implements it, how else would the right moment be detected at any time?)!
While parsing the configuration, EventPlayer's play method checks whether each SAX event is an EndEvent; for an EndEvent, the corresponding end actions fire, and that is where the rolling policy gets started!
// ch.qos.logback.core.joran.spi.EventPlayer
public void play(List<SaxEvent> aSaxEventList) {
eventList = aSaxEventList;
SaxEvent se;
for (currentIndex = 0; currentIndex < eventList.size(); currentIndex++) {
se = eventList.get(currentIndex);
if (se instanceof StartEvent) {
interpreter.startElement((StartEvent) se);
// invoke fireInPlay after startElement processing
interpreter.getInterpretationContext().fireInPlay(se);
}
if (se instanceof BodyEvent) {
// invoke fireInPlay before characters processing
interpreter.getInterpretationContext().fireInPlay(se);
interpreter.characters((BodyEvent) se);
}
// the rollingPolicy gets started from here, via the end actions
if (se instanceof EndEvent) {
// invoke fireInPlay before endElement processing
interpreter.getInterpretationContext().fireInPlay(se);
interpreter.endElement((EndEvent) se);
}
}
}
Then, after a few hops, we reach the Interpreter, which iterates over the applicable actions and invokes each one's end method!
// ch.qos.logback.core.joran.spi.Interpreter
private void callEndAction(List<Action> applicableActionList, String tagName) {
if (applicableActionList == null) {
return;
}
// logger.debug("About to call end actions on node: [" + localName + "]");
Iterator<Action> i = applicableActionList.iterator();
while (i.hasNext()) {
Action action = i.next();
// now let us invoke the end method of the action. We catch and report
// any eventual exceptions
try {
action.end(interpretationContext, tagName);
} catch (ActionException ae) {
// at this point endAction, there is no point in skipping children as
// they have been already processed
cai.addError("ActionException in Action for tag [" + tagName + "]", ae);
} catch (RuntimeException e) {
// no point in setting skip
cai.addError("RuntimeException in Action for tag [" + tagName + "]", e);
}
}
}
Finally, the RollingPolicy's start() gets called; here that is TimeBasedRollingPolicy.
// ch.qos.logback.core.rolling.TimeBasedRollingPolicy
public void start() {
// set the LR for our utility object
renameUtil.setContext(this.context);
// find out period from the filename pattern
if (fileNamePatternStr != null) {
fileNamePattern = new FileNamePattern(fileNamePatternStr, this.context);
determineCompressionMode();
} else {
addWarn(FNP_NOT_SET);
addWarn(CoreConstants.SEE_FNP_NOT_SET);
throw new IllegalStateException(FNP_NOT_SET + CoreConstants.SEE_FNP_NOT_SET);
}
compressor = new Compressor(compressionMode);
compressor.setContext(context);
// wcs : without compression suffix
fileNamePatternWithoutCompSuffix = new FileNamePattern(Compressor.computeFileNameStrWithoutCompSuffix(fileNamePatternStr, compressionMode), this.context);
addInfo("Will use the pattern " + fileNamePatternWithoutCompSuffix + " for the active file");
if (compressionMode == CompressionMode.ZIP) {
String zipEntryFileNamePatternStr = transformFileNamePattern2ZipEntry(fileNamePatternStr);
zipEntryFileNamePattern = new FileNamePattern(zipEntryFileNamePatternStr, context);
}
// by default, a DefaultTimeBasedFileNamingAndTriggeringPolicy is used for the rolling
if (timeBasedFileNamingAndTriggeringPolicy == null) {
timeBasedFileNamingAndTriggeringPolicy = new DefaultTimeBasedFileNamingAndTriggeringPolicy<E>();
}
timeBasedFileNamingAndTriggeringPolicy.setContext(context);
timeBasedFileNamingAndTriggeringPolicy.setTimeBasedRollingPolicy(this);
timeBasedFileNamingAndTriggeringPolicy.start();
if (!timeBasedFileNamingAndTriggeringPolicy.isStarted()) {
addWarn("Subcomponent did not start. TimeBasedRollingPolicy will not start.");
return;
}
// the maxHistory property is given to TimeBasedRollingPolicy instead of to
// the TimeBasedFileNamingAndTriggeringPolicy. This makes it more convenient
// for the user at the cost of inconsistency here.
if (maxHistory != UNBOUND_HISTORY) {
archiveRemover = timeBasedFileNamingAndTriggeringPolicy.getArchiveRemover();
archiveRemover.setMaxHistory(maxHistory);
archiveRemover.setTotalSizeCap(totalSizeCap.getSize());
if (cleanHistoryOnStart) {
addInfo("Cleaning on start up");
Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
cleanUpFuture = archiveRemover.cleanAsynchronously(now);
}
} else if (!isUnboundedTotalSizeCap()) {
addWarn("'maxHistory' is not set, ignoring 'totalSizeCap' option with value ["+totalSizeCap+"]");
}
// call the parent start() to set the started flag, so initialization cannot run twice
super.start();
}
// DefaultTimeBasedFileNamingAndTriggeringPolicy's start() mostly delegates to the TimeBasedFileNamingAndTriggeringPolicy base; itself it handles a few error cases and creates the archive remover used by the concrete implementation
@Override
public void start() {
super.start();
if (!super.isErrorFree())
return;
if(tbrp.fileNamePattern.hasIntegerTokenCOnverter()) {
addError("Filename pattern ["+tbrp.fileNamePattern+"] contains an integer token converter, i.e. %i, INCOMPATIBLE with this configuration. Remove it.");
return;
}
archiveRemover = new TimeBasedArchiveRemover(tbrp.fileNamePattern, rc);
archiveRemover.setContext(context);
started = true;
}
// the TimeBasedFileNamingAndTriggeringPolicy base, where the actual rolling logic is set up
public void start() {
DateTokenConverter<Object> dtc = tbrp.fileNamePattern.getPrimaryDateTokenConverter();
if (dtc == null) {
throw new IllegalStateException("FileNamePattern [" + tbrp.fileNamePattern.getPattern() + "] does not contain a valid DateToken");
}
if (dtc.getTimeZone() != null) {
rc = new RollingCalendar(dtc.getDatePattern(), dtc.getTimeZone(), Locale.getDefault());
} else {
rc = new RollingCalendar(dtc.getDatePattern());
}
addInfo("The date pattern is '" + dtc.getDatePattern() + "' from file name pattern '" + tbrp.fileNamePattern.getPattern() + "'.");
rc.printPeriodicity(this);
if (!rc.isCollisionFree()) {
addError("The date format in FileNamePattern will result in collisions in the names of archived log files.");
addError(CoreConstants.MORE_INFO_PREFIX + COLLIDING_DATE_FORMAT_URL);
withErrors();
return;
}
setDateInCurrentPeriod(new Date(getCurrentTime()));
if (tbrp.getParentsRawFileProperty() != null) {
File currentFile = new File(tbrp.getParentsRawFileProperty());
if (currentFile.exists() && currentFile.canRead()) {
setDateInCurrentPeriod(new Date(currentFile.lastModified()));
}
}
addInfo("Setting initial period to " + dateInCurrentPeriod);
computeNextCheck();
}
After all this initialization, notice that no polling thread has been started, which defies the naive expectation above! Either way, we press on! Let's look at RollingFileAppender's append() logic, since that is where log events actually enter!
// ch.qos.logback.core.rolling.RollingFileAppender; its entry point is UnsynchronizedAppenderBase.doAppend()
// ch.qos.logback.core.OutputStreamAppender
@Override
protected void append(E eventObject) {
if (!isStarted()) {
return;
}
// delegates to the RollingFileAppender override
subAppend(eventObject);
}
// ch.qos.logback.core.rolling.RollingFileAppender
@Override
protected void subAppend(E event) {
// The roll-over check must precede actual writing. This is the
// only correct behavior for time driven triggers.
// We need to synchronize on triggeringPolicy so that only one rollover
// occurs at a time
synchronized (triggeringPolicy) {
if (triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)) {
rollover();
}
}
super.subAppend(event);
}
Here, rollover() is the actual rolling logic!
So, there you have it: the file rolling is driven by external writes, in order to make writes thread safe and keep the file intact!
In other words, if a write arrives at (or after) the rollover moment, the file gets rolled; otherwise the file is never rolled on its own. If no log is ever written, no rolling ever happens!
Let's first look at the rolling condition: triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)
// ch.qos.logback.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy
public boolean isTriggeringEvent(File activeFile, final E event) {
long time = getCurrentTime();
if (time >= nextCheck) {
Date dateOfElapsedPeriod = dateInCurrentPeriod;
addInfo("Elapsed period: " + dateOfElapsedPeriod);
elapsedPeriodsFileName = tbrp.fileNamePatternWithoutCompSuffix.convert(dateOfElapsedPeriod);
setDateInCurrentPeriod(time);
computeNextCheck();
return true;
} else {
return false;
}
}
As the check shows, the current time is compared with the next rollover time; once that time has passed, it returns true and computes the next rollover time, ready for later use!
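The check boils down to "has now reached nextCheck; if so, roll and schedule the next boundary". A minimal sketch of that idea for an hourly pattern (the field and method names here are illustrative, not logback's actual ones):

```java
import java.util.Calendar;

public class TriggerSketch {
    long nextCheck;

    // Computes the next top-of-hour boundary after 'nowMillis'.
    static long nextTopOfHour(long nowMillis) {
        Calendar cal = Calendar.getInstance();
        cal.setTimeInMillis(nowMillis);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        cal.add(Calendar.HOUR_OF_DAY, 1);
        return cal.getTimeInMillis();
    }

    // Mirrors isTriggeringEvent: true exactly when a period boundary has passed.
    boolean shouldRoll(long nowMillis) {
        if (nowMillis >= nextCheck) {
            nextCheck = nextTopOfHour(nowMillis);
            return true;
        }
        return false;
    }
}
```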
Next, the actual file rolling. Two main steps: 1. rename the file into its rolled name; 2. re-create a fresh target file so subsequent writes can continue!
/**
* Implemented by delegating most of the rollover work to a rolling policy.
*/
public void rollover() {
// 'lock' is a ReentrantLock, i.e. mutually exclusive: only one thread may enter
lock.lock();
try {
// Note: This method needs to be synchronized because it needs exclusive
// access while it closes and then re-opens the target file.
//
// make sure to close the hereto active log file! Renaming under windows
// does not work for open files.
this.closeOutputStream();
attemptRollover();
attemptOpenFile();
} finally {
lock.unlock();
}
}
// the rolling delegates to the configured policy; here that is TimeBasedRollingPolicy
private void attemptRollover() {
try {
rollingPolicy.rollover();
} catch (RolloverFailure rf) {
addWarn("RolloverFailure occurred. Deferring roll-over.");
// we failed to roll-over, let us not truncate and risk data loss
this.append = true;
}
}
// ch.qos.logback.core.rolling.TimeBasedRollingPolicy rollover
public void rollover() throws RolloverFailure {
// when rollover is called the elapsed period's file has
// been already closed. This is a working assumption of this method.
String elapsedPeriodsFileName = timeBasedFileNamingAndTriggeringPolicy.getElapsedPeriodsFileName();
String elapsedPeriodStem = FileFilterUtil.afterLastSlash(elapsedPeriodsFileName);
if (compressionMode == CompressionMode.NONE) {
if (getParentsRawFileProperty() != null) {
renameUtil.rename(getParentsRawFileProperty(), elapsedPeriodsFileName);
} // else { nothing to do if CompressionMode == NONE and parentsRawFileProperty == null }
} else {
if (getParentsRawFileProperty() == null) {
compressionFuture = compressor.asyncCompress(elapsedPeriodsFileName, elapsedPeriodsFileName, elapsedPeriodStem);
} else {
compressionFuture = renameRawAndAsyncCompress(elapsedPeriodsFileName, elapsedPeriodStem);
}
}
if (archiveRemover != null) {
Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
this.cleanUpFuture = archiveRemover.cleanAsynchronously(now);
}
}
TimeBasedRollingPolicy rolls simply by renaming: it takes the active file set from outside, derives a new path from the file name pattern, and renames the file! The rename itself has some subtleties, which are worth a look:
// ch.qos.logback.core.rolling.helper.RenameUtil
/**
* A relatively robust file renaming method which in case of failure due to
* src and target being on different volumes, falls back onto
* renaming by copying.
*
* @param src
* @param target
* @throws RolloverFailure
*/
public void rename(String src, String target) throws RolloverFailure {
if (src.equals(target)) {
addWarn("Source and target files are the same [" + src + "]. Skipping.");
return;
}
File srcFile = new File(src);
if (srcFile.exists()) {
// if the target directory is missing, it is created first, so you can roll to anywhere without preparing the directory (permissions aside)
File targetFile = new File(target);
createMissingTargetDirsIfNecessary(targetFile);
addInfo("Renaming file [" + srcFile + "] to [" + targetFile + "]");
boolean result = srcFile.renameTo(targetFile);
// when the plain rename fails, it tries again: if source and target sit on different volumes, it falls back to renaming by copying, i.e. copy the file to the new location and then delete the original
if (!result) {
addWarn("Failed to rename file [" + srcFile + "] as [" + targetFile + "].");
Boolean areOnDifferentVolumes = areOnDifferentVolumes(srcFile, targetFile);
if (Boolean.TRUE.equals(areOnDifferentVolumes)) {
addWarn("Detected different file systems for source [" + src + "] and target [" + target + "]. Attempting rename by copying.");
renameByCopying(src, target);
return;
} else {
addWarn("Please consider leaving the [file] option of " + RollingFileAppender.class.getSimpleName() + " empty.");
addWarn("See also " + RENAMING_ERROR_URL);
}
}
} else {
throw new RolloverFailure("File [" + src + "] does not exist.");
}
}
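The same rename-else-copy strategy can be sketched with java.nio (logback's RenameUtil predates NIO and does this manually with File.renameTo plus its own copy; this is an illustration of the strategy, not logback's code):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameSketch {
    static void rename(Path src, Path target) throws IOException {
        // Create missing target directories first, as RenameUtil does.
        if (target.getParent() != null) {
            Files.createDirectories(target.getParent());
        }
        try {
            // Fast path: an atomic move, which works within one file system.
            Files.move(src, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Fallback across volumes: copy the bytes, then delete the source.
            Files.copy(src, target, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(src);
        }
    }
}
```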
After the rename, one more task may remain: deleting expired logs! This is done by the archiveRemover, the instance created earlier in DefaultTimeBasedFileNamingAndTriggeringPolicy, via archiveRemover.cleanAsynchronously(now);
public Future<?> cleanAsynchronously(Date now) {
ArhiveRemoverRunnable runnable = new ArhiveRemoverRunnable(now);
ExecutorService executorService = context.getScheduledExecutorService();
Future<?> future = executorService.submit(runnable);
return future;
}
For the deletion of expired logs, an ExecutorService is obtained first and the deletion runs asynchronously; this ExecutorService is the context's shared pool, which keeps 8 core threads by default for such housekeeping!
Running the deletion asynchronously avoids impacting the business threads. The cleanup process:
public class ArhiveRemoverRunnable implements Runnable {
Date now;
ArhiveRemoverRunnable(Date now) {
this.now = now;
}
@Override
public void run() {
// first clean the elapsed periods, then enforce the configured total size cap
clean(now);
if (totalSizeCap != UNBOUNDED_TOTAL_SIZE_CAP && totalSizeCap > 0) {
capTotalSize(now);
}
}
}
public void clean(Date now) {
long nowInMillis = now.getTime();
// for a live appender periodsElapsed is expected to be 1
int periodsElapsed = computeElapsedPeriodsSinceLastClean(nowInMillis);
lastHeartBeat = nowInMillis;
if (periodsElapsed > 1) {
addInfo("Multiple periods, i.e. " + periodsElapsed + " periods, seem to have elapsed. This is expected at application start.");
}
for (int i = 0; i < periodsElapsed; i++) {
// the deletion offset is derived from maxHistory; i.e. only periodsElapsed periods of history get cleaned here
int offset = getPeriodOffsetForDeletionTarget() - i;
Date dateOfPeriodToClean = rc.getEndOfNextNthPeriod(now, offset);
cleanPeriod(dateOfPeriodToClean);
}
}
public void cleanPeriod(Date dateOfPeriodToClean) {
// collect the files to delete for this period and delete them one by one; if that empties the folder, remove the folder too
File[] matchingFileArray = getFilesInPeriod(dateOfPeriodToClean);
for (File f : matchingFileArray) {
addInfo("deleting " + f);
f.delete();
}
if (parentClean && matchingFileArray.length > 0) {
File parentDir = getParentDir(matchingFileArray[0]);
removeFolderIfEmpty(parentDir);
}
}
// match the files to delete against the pattern
protected File[] getFilesInPeriod(Date dateOfPeriodToClean) {
String filenameToDelete = fileNamePattern.convert(dateOfPeriodToClean);
File file2Delete = new File(filenameToDelete);
if (fileExistsAndIsFile(file2Delete)) {
return new File[] { file2Delete };
} else {
return new File[0];
}
}
// size-capping logic; note: for this history cleanup to happen, totalSizeCap must be set, otherwise no size-based cleanup runs!
void capTotalSize(Date now) {
long totalSize = 0;
long totalRemoved = 0;
for (int offset = 0; offset < maxHistory; offset++) {
Date date = rc.getEndOfNextNthPeriod(now, -offset);
File[] matchingFileArray = getFilesInPeriod(date);
descendingSortByLastModified(matchingFileArray);
for (File f : matchingFileArray) {
long size = f.length();
if (totalSize + size > totalSizeCap) {
addInfo("Deleting [" + f + "]" + " of size " + new FileSize(size));
totalRemoved += size;
f.delete();
}
totalSize += size;
}
}
addInfo("Removed " + new FileSize(totalRemoved) + " of files");
}
That is the whole expired-log deletion logic. The key points:
1. Only maxHistory periods of logs get cleaned, i.e. the scan only reaches back n periods;
2. Size capping only deletes the files that fall beyond totalSizeCap; this depends heavily on the ordering of the file list, which is sorted by last-modified time;
3. maxHistory is not a maximum retention in days, whatever the docs may lead you to believe: it is just a scan window in periods, although the cleanup step above does act on it once!
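Point 2 can be made concrete with a small sketch: sort the files newest-first by modification time, accumulate their sizes, and everything past the cap is a deletion candidate (the names here are illustrative, not logback's):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SizeCapSketch {
    // Returns the files that fall beyond 'cap' bytes once files are kept newest-first.
    static List<File> filesOverCap(File[] files, long cap) {
        File[] sorted = files.clone();
        // Newest first, like descendingSortByLastModified in capTotalSize.
        Arrays.sort(sorted, Comparator.comparingLong(File::lastModified).reversed());
        List<File> over = new ArrayList<>();
        long total = 0;
        for (File f : sorted) {
            if (total + f.length() > cap) {
                over.add(f); // would be deleted by capTotalSize
            }
            total += f.length();
        }
        return over;
    }
}
```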
One more detail deserves a look: the rollover boundary itself. By day, by hour, by minute?
// determining the rollover boundary
// ch.qos.logback.core.rolling.helper.RollingCalendar
public Date getEndOfNextNthPeriod(Date now, int periods) {
return innerGetEndOfNextNthPeriod(this, this.periodicityType, now, periods);
}
static private Date innerGetEndOfNextNthPeriod(Calendar cal, PeriodicityType periodicityType, Date now, int numPeriods) {
cal.setTime(now);
switch (periodicityType) {
case TOP_OF_MILLISECOND:
cal.add(Calendar.MILLISECOND, numPeriods);
break;
case TOP_OF_SECOND:
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.SECOND, numPeriods);
break;
case TOP_OF_MINUTE:
cal.set(Calendar.SECOND, 0);
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.MINUTE, numPeriods);
break;
case TOP_OF_HOUR:
cal.set(Calendar.MINUTE, 0);
cal.set(Calendar.SECOND, 0);
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.HOUR_OF_DAY, numPeriods);
break;
case TOP_OF_DAY:
cal.set(Calendar.HOUR_OF_DAY, 0);
cal.set(Calendar.MINUTE, 0);
cal.set(Calendar.SECOND, 0);
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.DATE, numPeriods);
break;
case TOP_OF_WEEK:
cal.set(Calendar.DAY_OF_WEEK, cal.getFirstDayOfWeek());
cal.set(Calendar.HOUR_OF_DAY, 0);
cal.set(Calendar.MINUTE, 0);
cal.set(Calendar.SECOND, 0);
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.WEEK_OF_YEAR, numPeriods);
break;
case TOP_OF_MONTH:
cal.set(Calendar.DATE, 1);
cal.set(Calendar.HOUR_OF_DAY, 0);
cal.set(Calendar.MINUTE, 0);
cal.set(Calendar.SECOND, 0);
cal.set(Calendar.MILLISECOND, 0);
cal.add(Calendar.MONTH, numPeriods);
break;
default:
throw new IllegalStateException("Unknown periodicity type.");
}
return cal.getTime();
}
So the available granularities are TOP_OF_MILLISECOND/TOP_OF_SECOND/TOP_OF_MINUTE/TOP_OF_HOUR/TOP_OF_DAY/TOP_OF_WEEK/TOP_OF_MONTH; quite fine-grained, when you think about it! Whether every level is actually useful is another matter!
To summarize logback's rolling:
1. The rollover check happens at write time; when the moment has come, the file is rolled;
2. The rollover itself is synchronized, guaranteeing thread safety;
3. Files are rolled by renaming; if that fails across volumes, a one-time copy-based rename is attempted;
4. Expired logs are deleted on two fronts: first, the n periods before the current one are checked and any matching files deleted;
5. Second, when a total size cap is set, files within the scan window are additionally size-checked, and the oldest (by modification time) beyond the cap get deleted;
6. Deletion triggered at rollover runs asynchronously, so it generally does not affect the business threads;
3. Third-party tool: the classic cronolog
cronolog is a very old log rolling tool (most likely no longer maintained). It accepts an application's log output and stores it into files according to a rule, e.g. named by year, month, day, hour, minute, second!
Material about it online has become scarce, and people rack their brains just to find an installer. Here is a convenience package: download here;
Its GitHub address is https://github.com/fordmason/cronolog ; you can of course fetch the full package yourself and install it!
Still, a few other installation routes are worth mentioning:
1. Install straight from yum (this appears to require the epel repository) (recommended)
yum install cronolog -y
2. Take the package downloaded above and simply unpack it
tar -zxvf cronolog-bin.tar.gz -C /
3. Build from source packages found online
hehe...
All that said, the point is to use it: how does it hook into an application?
Just append the following to your existing launch command!
$> | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out
A complete example looks like this:
exec nohup java -jar /www/aproj\.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null &
That is how most people online write it, but in some situations it misbehaves. For example, when I start the service remotely, the call never returns! Why? In any case, the variant below works perfectly: add one more redirection, 2>&1, after the cronolog part.
exec nohup java -jar /www/aproj\.jar 2>&1 | /usr/local/sbin/cronolog -S /var/logs/ai_ln.out /var/logs/ai.%Y-%m-%d-%H.out >> /dev/null 2>&1 &
So, compared with the application writing its own logs, what does this tool buy you, and how does it work?
The benefits were mentioned above: no intrusion into the code, and more flexible control!
Its working principle: accept a standard input stream and write it into the appropriate file. It does not delete files, so expiring old logs still relies on a separate script!
Its core source looks like this (in C):
/* Loop, waiting for data on standard input */
for (;;)
{
/**
* Read a buffer's worth of log file data, exiting on errors
* or end of file.
*/
n_bytes_read = read(0, read_buf, sizeof read_buf);
if (n_bytes_read == 0)
{
exit(3);
}
if (errno == EINTR)
{
continue;
}
else if (n_bytes_read < 0)
{
exit(4);
}
time_now = time(NULL) + time_offset;
/**
* If the current period has finished and there is a log file
* open, close the log file
*/
if ((time_now >= next_period) && (log_fd >= 0))
{
close(log_fd);
log_fd = -1;
}
/**
* If there is no log file open then open a new one.
*/
if (log_fd < 0)
{
log_fd = new_log_file(template, linkname, linktype, prevlinkname,
periodicity, period_multiple, period_delay,
filename, sizeof (filename), time_now, &next_period);
}
DEBUG(("%s (%d): wrote message; next period starts at %s (%d) in %d secs\n",
timestamp(time_now), time_now,
timestamp(next_period), next_period,
next_period - time_now));
/**
* Write out the log data to the current log file.
*/
if (write(log_fd, read_buf, n_bytes_read) != n_bytes_read)
{
perror(filename);
exit(5);
}
}
Roughly, it works like this:
1. Once the cronolog process starts, it loops forever, unless it hits an error, e.g. the application shutting down;
2. It blocks reading from standard input; once data arrives, it performs the file handling;
3. After each read, it checks whether a new rolling period has begun; if so, it closes the current file and creates a fresh one to write to;
4. It then simply writes the buffered content into the open file;
5. All input arrives over a pipe: simple and practical;
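The loop above can be sketched in Java: read chunks from an input stream and append them to a file whose name is derived from the current period, reopening whenever the period changes (the hour bucketing and the naming callback are illustrative assumptions, not cronolog's API):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.function.LongFunction;

public class PipeRoller {
    // Pumps 'in' into files named by 'fileNameFor', rolling on hour boundaries.
    static void pump(InputStream in, LongFunction<String> fileNameFor) throws IOException {
        byte[] buf = new byte[8192];
        long currentPeriod = -1;
        OutputStream out = null;
        int n;
        while ((n = in.read(buf)) > 0) {
            long period = System.currentTimeMillis() / 3_600_000L; // hour bucket
            if (period != currentPeriod) {
                if (out != null) out.close(); // close the elapsed period's file
                out = new FileOutputStream(fileNameFor.apply(period), true); // append mode
                currentPeriod = period;
            }
            out.write(buf, 0, n);
        }
        if (out != null) out.close();
    }
}
```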
Looks simple! Could anything go wrong? Probably not; it has stood the test of time, and the simpler something is, the more reliable it tends to be!
Looking at the code above, someone is bound to say: anyone can write that, I could knock one out in shell. Leaving aside whether your shell version would be as reliable, yours would be shell and this is C; hardly the same league!
Finally, one problem remains to be handled: cleaning up expired logs.
This little tool will not do that for you (or at least I have not found such a feature), so we have to script the cleanup ourselves. One line does it!
# vim clean_log.sh
find /var/logs -mtime +8 -name "ai.*out" -exec rm -f {} \;
# then add a crontab entry, typically once a day:
0 0 * * * sh clean_log.sh
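For completeness, the same expiry sweep can also be written in Java, e.g. for environments without cron; a hedged sketch (the directory, glob, and retention here are parameters of my own, not anything cronolog provides):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.stream.Stream;

public class LogCleaner {
    // Deletes regular files under 'dir' whose names match 'glob' and which were
    // last modified more than 'days' days ago; returns the number deleted.
    static long deleteExpired(Path dir, String glob, long days) throws IOException {
        Instant cutoff = Instant.now().minus(days, ChronoUnit.DAYS);
        PathMatcher m = dir.getFileSystem().getPathMatcher("glob:" + glob);
        long deleted = 0;
        try (Stream<Path> walk = Files.walk(dir)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                if (Files.isRegularFile(p) && m.matches(p.getFileName())
                        && Files.getLastModifiedTime(p).toInstant().isBefore(cutoff)) {
                    Files.delete(p);
                    deleted++;
                }
            }
        }
        return deleted;
    }
}
```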
Done!
4. A fuller shell cleanup script
One line can clean up the files; of course, you can also write something more complete:
#!/bin/bash
log_path_prefix=/opt/springboot/logs
expire_hours=3;
expire_minutes=$(( expire_hours * 60 ));
now_time=`date "+%Y-%m-%d %H:%M:%S"`
echo "-At $now_time";
# del function
function del_expire_logs() {
find_cmd="find $1 -mmin +${2} -type f "
if [ "$3" != "" ]; then
find_cmd="$find_cmd -name '$3'";
fi;
echo " -Cmd: $find_cmd";
f_expired_files=`eval $find_cmd`;
echo " -Find result: $f_expired_files";
if [ "$f_expired_files" != "" ]; then
file_list=($f_expired_files);
for item in ${file_list[@]};
do
echo " -Del file: $item";
rm -rf $item;
done;
fi;
}
del_expire_logs $log_path_prefix $expire_minutes "*.out";
log_path_prefix2=/opt/logs
expire_minutes2=2880; # two days
del_expire_logs $log_path_prefix2 $expire_minutes2;
That covers these log rolling implementations and the principles behind them! Feels clearer now, doesn't it? Haha...
Things are rarely as hard as they seem!
Don't fear today's hardship; trust that tomorrow will be harder still!