How to use the Boost.log library
Published: 2019-04-29


The first thing to do in server development is to set up a logging system; it is what lets us diagnose problems on the server.

Structure of the Boost.log library

In Boost.Log, records flow from loggers (sources) into the logging core, where global filtering is applied, and are then delivered to sink frontends and sink backends, which format and write them.

Initializing the logging system

Here is the code that initializes the logging system. I use a configuration file to set up Boost.Log, so the code itself stays very concise.

#include <fstream>
#include <iostream>
#include <string>
#include <boost/filesystem.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <boost/log/utility/setup/formatter_parser.hpp>
#include <boost/log/utility/setup/filter_parser.hpp>
#include <boost/log/utility/setup/from_stream.hpp>

bool init_log_environment(std::string _cfg)
{
    namespace logging = boost::log;
    using namespace logging::trivial;

    // Make sure the log directory exists before any file sink is created.
    if (!boost::filesystem::exists("./log/"))
    {
        boost::filesystem::create_directory("./log/");
    }

    // Register TimeStamp, ProcessID, ThreadID, etc.
    logging::add_common_attributes();

    // Teach the settings parser how to format and filter on "Severity".
    logging::register_simple_formatter_factory<severity_level, char>("Severity");
    logging::register_simple_filter_factory<severity_level, char>("Severity");

    std::ifstream file(_cfg);
    try
    {
        logging::init_from_stream(file);
    }
    catch (const std::exception& e)
    {
        std::cout << "init_logger failed: could not read the log config file. cause: "
                  << e.what() << std::endl;
        exit(-2);
    }
    return true;
}

The core call is logging::init_from_stream, which loads your logging configuration. This is much simpler than the usual approach of hard-coding sink setup and severity levels. Next, let's look at how to write the configuration file.
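As a quick sketch of how it is called at startup (this assumes the init_log_environment function from the snippet above and the log_conf.ini file name that appears in the sample output further down):

#include <boost/log/trivial.hpp>
#include <string>

bool init_log_environment(std::string _cfg);  // defined above

int main()
{
    // Load sinks, filters and formats from the configuration file.
    init_log_environment("log_conf.ini");

    // From here on, records are routed according to that configuration.
    BOOST_LOG_TRIVIAL(info) << "server started";
    return 0;
}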

Configuring the log

[Core]
DisableLogging=false
Filter="%Severity% >= trace"

[Sinks.console]
Filter="%Severity% > trace"
Destination=Console
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=false
AutoFlush=true

# trace
[Sinks.trace]
Filter="%Severity% = trace"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] %Message%"
Asynchronous=false
AutoFlush=true
RotationSize=31457280
FileName="./log/trace_%03N.log"

# debug
[Sinks.debug]
Filter="%Severity% = debug"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=false
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/debug_%Y%m%d_%H%M%S.log"

# info
[Sinks.info]
Filter="%Severity% = info"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=false
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/info_%Y%m%d_%H%M%S.log"

# warning
[Sinks.warning]
Filter="%Severity% = warning"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=false
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/warning_%Y%m%d_%H%M%S.log"

# error
[Sinks.error]
Filter="%Severity% = error"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=false
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/erro_%Y%m%d_%H%M%S.log"

# fatal
[Sinks.fatal]
Filter="%Severity% = fatal"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
Asynchronous=true
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/fatal_%Y%m%d_%H%M%S.log"

In this configuration file we set the logging core's output severity directly, and give each severity level its own format. A particularly nice feature is that log rotation can be driven entirely by configuration; look closely at the sink settings above and you will see that rotating by time of day is supported.

You can also define your own filter conditions so that particular records are singled out into their own sink for analysis, for example:

Filter="%Severity% = fatal"
Text file sink parameters (parameter, accepted format, description):

FileName (file name pattern): The file name pattern for the sink backend. This parameter is mandatory.
Format (format string): Log record formatter to be used by the sink. If not specified, the default formatter is used.
AutoFlush ("true" or "false"): Enables or disables the auto-flush feature of the backend. If not specified, false is assumed.
RotationSize (unsigned integer): File size, in bytes, at which file rotation is performed. If not specified, no size-based rotation is done.
RotationInterval (unsigned integer): Time interval, in seconds, at which file rotation is performed. See also RotationTimePoint.
RotationTimePoint (time point format string): Time point, or a predicate, that determines when to rotate the log file. See also RotationInterval.
Target (file system path to a directory): Target directory in which the rotated files are stored. If this parameter is specified, rotated file collection is enabled; otherwise the feature is disabled and the cleanup parameters below are ignored.
MaxSize (unsigned integer): Total size of files in the target directory, in bytes, above which the oldest file is deleted. If not specified, no size-based file cleanup is performed.
MinFreeSpace (unsigned integer): Minimum free space in the target directory, in bytes, below which the oldest file is deleted. If not specified, no space-based file cleanup is performed.
MaxFiles (unsigned integer): Total number of files in the target directory, above which the oldest file is deleted. If not specified, no count-based file cleanup is performed.
ScanForFiles ("All" or "Matching"): Mode of scanning for old files in the target directory (see scan_method). If not specified, no scanning is performed.

As you can see, it is not just RotationTimePoint that is supported (my example rotates at midnight): rotation by time interval and by file size is also available, and Target lets you specify where the rotated files are collected.
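For example, a sink section could be extended with the collection parameters like this (the ./log/old directory and the MaxFiles value are made up for illustration and are not part of the original configuration):

# Example only: collect rotated info logs into ./log/old, keep at most 30 files
[Sinks.info]
Filter="%Severity% = info"
Destination=TextFile
Format="[%TimeStamp%] [%Severity%] [%Channel%] %Message%"
AutoFlush=true
RotationTimePoint="00:00:00"
FileName="./log/info_%Y%m%d_%H%M%S.log"
Target="./log/old"
MaxFiles=30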

How to write log messages

Includes and declaration in the header file

#include <boost/log/trivial.hpp>
#include <boost/log/sources/severity_channel_logger.hpp>

namespace logging = boost::log;
using namespace logging::trivial;
namespace src = boost::log::sources;

// Declare a log output channel object as a member of your own class
src::severity_channel_logger<severity_level> scl;

Usage in the .cpp file

// Initialize the channel logger in the constructor, naming the channel
// (requires namespace keywords = boost::log::keywords;)
xxxx::xxxx(void) : scl(keywords::channel = "xxxx_class")
{
}

// Write a record at the desired severity level
BOOST_LOG_SEV(scl, debug) << __FUNCTION__ << ":" << __LINE__
                          << " success, fd: " << fd << ", ip: " << str_addr;

Output

[2016-08-25 10:29:18.328592] [debug] [config] config::show_config:53 port:9210
[2016-08-25 10:29:18.333095] [debug] [config] config::show_config:54 name:abelserver
[2016-08-25 10:29:18.335597] [debug] [config] config::show_config:55 log_config:log_conf.ini
[2016-08-25 10:29:18.338599] [debug] [config] config::show_config:56 redis_host:127.0.0.1
[2016-08-25 10:29:18.342104] [debug] [config] config::show_config:57 redis_port:6379

# Output file name: erro_20160325_145443.log

When debugging, you may need to dump raw data to inspect it; for that you can use:

void on_receive(std::vector<unsigned char> const& packet)
{
    // Outputs something like "Packet received: 00 01 02 0a 0b 0c"
    BOOST_LOG(lg) << "Packet received: " << logging::dump(packet.data(), packet.size());
}

Overall it is very simple to use and already covers most needs. Boost.Log also supports custom sinks; for the details you will need to consult the official documentation.
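For reference, here is a minimal sketch of attaching a sink to the logging core in code rather than through the settings file. It uses the built-in text_ostream_backend purely as a stand-in; a genuinely custom sink would supply its own backend class, for which the official documentation is the place to look:

#include <iostream>
#include <boost/core/null_deleter.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ostream_backend.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;

void add_console_sink()
{
    // Backend: where records are actually written (here: std::clog).
    auto backend = boost::make_shared<sinks::text_ostream_backend>();
    backend->add_stream(
        boost::shared_ptr<std::ostream>(&std::clog, boost::null_deleter()));
    backend->auto_flush(true);

    // Frontend: synchronizes access to the backend and attaches it to the core.
    typedef sinks::synchronous_sink<sinks::text_ostream_backend> sink_t;
    boost::shared_ptr<sink_t> sink = boost::make_shared<sink_t>(backend);

    logging::core::get()->add_sink(sink);
}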
