Verified Commit 6d30ca4e authored by Marc Vef

Merge branch 'marc/stats_review' into rnou/stats_prometheus

parents b5907694 a6344c72
Pipeline #2427 passed with stages
in 29 minutes and 1 second
......@@ -10,18 +10,20 @@ to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
### New
- Added Stats ([!132](https://storage.bsc.es/gitlab/hpc/gekkofs/-/merge_requests/132)) gathering in servers
- Stats output can be enabled with --output-stats <filename>
- --enable-collection collects normal stats
- --enable-chunkstats collects extended chunk stats
- Added statistics gathering on daemons ([!132](https://storage.bsc.es/gitlab/hpc/gekkofs/-/merge_requests/132)).
- Statistics collection can be enabled with:
- `--enable-collection` collects normal statistics.
- `--enable-chunkstats` collects extended chunk statistics.
- Statistics output to a file is controlled by `--output-stats <filename>`.
- Added Prometheus support for outputting
statistics ([!132](https://storage.bsc.es/gitlab/hpc/gekkofs/-/merge_requests/132)):
- Prometheus dependency optional and enabled at compile time with the CMake argument `GKFS_ENABLE_PROMETHEUS`.
- `--enable-prometheus` enables statistics pushing to Prometheus if statistics are enabled.
- `--prometheus-gateway` sets an IP and port for the Prometheus connection.
- Added new experimental metadata backend:
Parallax ([!110](https://storage.bsc.es/gitlab/hpc/gekkofs/-/merge_requests/110)).
- Added support to use multiple metadata backends.
- Added `--clean-rootdir-finish` argument to remove rootdir/metadir at the end when the daemon finishes.
- Added Prometheus Output ([!132](https://storage.bsc.es/gitlab/hpc/gekkofs/-/merge_requests/132))
- New option to define gateway --prometheus-gateway <gateway:port>
- Prometheus output is optional with "GKFS_ENABLE_PROMETHEUS"
- --enable-prometheus creates a thread to push the metrics.
### Changed
......
......@@ -109,8 +109,11 @@ Options:
RocksDB is default if not set. Parallax support is experimental.
Note that parallaxdb creates an 8GB file called rocksdbx in metadir.
--parallaxsize TEXT parallaxdb - metadata file size in GB (default 8GB), used only with new files
--output-stats TEXT Enables the output of the stats on the FILE (each 10s) for debug
--prometheus-gateway TEXT Defines the ip:port of the Prometheus Push gateway
--enable-collection Enables collection of general statistics. Output requires either the --output-stats or --enable-prometheus argument.
--enable-chunkstats Enables collection of data chunk statistics in I/O operations. Output requires either the --output-stats or --enable-prometheus argument.
--output-stats TEXT Creates a thread that outputs the server stats each 10s to the specified file.
--enable-prometheus Enables prometheus output and a corresponding thread.
--prometheus-gateway TEXT Defines the prometheus gateway <ip:port> (Default 127.0.0.1:9091).
--version Print version and exit.
```
......@@ -233,22 +236,30 @@ Then, the `examples/distributors/guided/generate.py` script is used to create the
Finally, modify `guided_config.txt` to your distribution requirements.
### Metadata Backends
There are two different metadata backends in GekkoFS. The default one uses `rocksdb`, however an alternative based on `PARALLAX` from `FORTH`
is available.
To enable it use the `-DGKFS_ENABLE_PARALLAX:BOOL=ON` option, you can also disable `rocksdb` with `-DGKFS_ENABLE_ROCKSDB:BOOL=OFF`.
There are two different metadata backends in GekkoFS. The default one uses `rocksdb`, but an alternative based
on `PARALLAX` from `FORTH`
is available. To enable it, use the `-DGKFS_ENABLE_PARALLAX:BOOL=ON` option; you can also disable `rocksdb`
with `-DGKFS_ENABLE_ROCKSDB:BOOL=OFF`.
Once it is enabled, the `--dbbackend` option becomes functional.
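For example, a configure invocation enabling Parallax and disabling RocksDB might look like this (the build and source paths are illustrative assumptions):

```shell
# Hypothetical paths: run from an out-of-source build directory.
cmake -DGKFS_ENABLE_PARALLAX:BOOL=ON \
      -DGKFS_ENABLE_ROCKSDB:BOOL=OFF \
      /path/to/gekkofs
```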
### Stats
Stats from each server are written to the file specified with `--output-stats <FILE>`. Collection is done with two separate flags, `--enable-collection` and `--enable-chunkstats`, for normal and extended chunk stats. The extended chunk stats store each chunk access.
Pushing stats to Prometheus is enabled with the `-DGKFS_ENABLE_PROMETHEUS` and the flag `--enable-prometheus`. We are using a push model.
### Statistics
GekkoFS daemons are able to output general operations (`--enable-collection`) and data chunk
statistics (`--enable-chunkstats`) to a specified output file via `--output-stats <FILE>`. Prometheus can also be used
instead of or in addition to the output file. It must be enabled at compile time via the CMake
argument `-DGKFS_ENABLE_PROMETHEUS` and the daemon argument `--enable-prometheus`. The corresponding statistics are then
pushed to the Prometheus instance.
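As a usage sketch (the daemon binary name, root/mount directories, and file paths below are illustrative assumptions, not taken from this commit), combining collection, file output, and Prometheus push could look like:

```shell
# Hypothetical invocation: adjust the binary path, rootdir, and mountdir
# to your setup. Requires a build with -DGKFS_ENABLE_PROMETHEUS for the
# last two arguments to be available.
gkfs_daemon -r /tmp/gkfs_rootdir -m /tmp/gkfs_mountdir \
    --enable-collection --enable-chunkstats \
    --output-stats /tmp/gkfs_stats.txt \
    --enable-prometheus --prometheus-gateway 127.0.0.1:9091
```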
### Acknowledgment
This software was partially supported by the EC H2020 funded NEXTGenIO project (Project ID: 671951, www.nextgenio.eu).
This software was partially supported by the ADA-FS project under the SPPEXA project (http://www.sppexa.de/) funded by the DFG.
This software was partially supported by the ADA-FS project under the SPPEXA project (http://www.sppexa.de/) funded by
the DFG.
This software is partially supported by the FIDIUM project funded by the DFG.
This software is partially supported by the ADMIRE project (https://www.admire-eurohpc.eu/) funded by the European Union’s Horizon 2020 JTI-EuroHPC Research and Innovation Programme (Grant 956748).
This software is partially supported by the ADMIRE project (https://www.admire-eurohpc.eu/) funded by the European
Union’s Horizon 2020 JTI-EuroHPC Research and Innovation Programme (Grant 956748).
......@@ -79,8 +79,11 @@ Options:
RocksDB is default if not set. Parallax support is experimental.
Note that parallaxdb creates an 8GB file called rocksdbx in metadir.
--parallaxsize TEXT parallaxdb - metadata file size in GB (default 8GB), used only with new files
--output-stats TEXT Outputs the stats to the file each 10s.
--prometheus-gateway TEXT Defines the ip:port of the Prometheus Push gateway
--enable-collection Enables collection of general statistics. Output requires either the --output-stats or --enable-prometheus argument.
--enable-chunkstats Enables collection of data chunk statistics in I/O operations. Output requires either the --output-stats or --enable-prometheus argument.
--output-stats TEXT Creates a thread that outputs the server stats each 10s to the specified file.
--enable-prometheus Enables prometheus output and a corresponding thread.
--prometheus-gateway TEXT Defines the prometheus gateway <ip:port> (Default 127.0.0.1:9091).
--version Print version and exit.
````
......
......@@ -118,25 +118,25 @@ private:
std::map<IopsOp, std::atomic<unsigned long>>
IOPS; ///< Stores total value for global mean
iops_mean; ///< Stores total value for global mean
std::map<SizeOp, std::atomic<unsigned long>>
SIZE; ///< Stores total value for global mean
size_mean; ///< Stores total value for global mean
std::mutex time_iops_mutex;
std::mutex size_iops_mutex;
std::map<IopsOp,
std::deque<std::chrono::time_point<std::chrono::steady_clock>>>
TimeIops; ///< Stores timestamp when an operation comes removes if
///< first operation if > 10 minutes Different means will
///< be stored and cached 1 minuted
time_iops; ///< Stores the timestamp when an operation arrives and
           ///< removes the first entry if it is older than 10
           ///< minutes. Different means are stored and cached every
           ///< minute
std::map<SizeOp, std::deque<std::pair<
std::chrono::time_point<std::chrono::steady_clock>,
unsigned long long>>>
TimeSize; ///< For size operations we need to store the timestamp
///< and the size
time_size; ///< For size operations we need to store the timestamp
///< and the size
std::thread t_output; ///< Thread that outputs stats info
......@@ -159,10 +159,10 @@ private:
std::map<std::pair<std::string, unsigned long long>,
std::atomic<unsigned int>>
chunkRead; ///< Stores the number of times a chunk/file is read
chunk_reads; ///< Stores the number of times a chunk/file is read
std::map<std::pair<std::string, unsigned long long>,
std::atomic<unsigned int>>
chunkWrite; ///< Stores the number of times a chunk/file is write
chunk_writes; ///< Stores the number of times a chunk/file is written
/**
* @brief Called by output to generate CHUNK map
......@@ -189,8 +189,8 @@ private:
///< Prometheus cpp)
Family<Summary>* family_summary; ///< Prometheus SIZE counter (managed by
///< Prometheus cpp)
std::map<IopsOp, Counter*> iops_Prometheus; ///< Prometheus IOPS metrics
std::map<SizeOp, Summary*> size_Prometheus; ///< Prometheus SIZE metrics
std::map<IopsOp, Counter*> iops_prometheus; ///< Prometheus IOPS metrics
std::map<SizeOp, Summary*> size_prometheus; ///< Prometheus SIZE metrics
#endif
public:
......
......@@ -52,18 +52,20 @@ target_sources(statistics
if(GKFS_ENABLE_PROMETHEUS)
find_package(CURL REQUIRED)
find_package(prometheus-cpp REQUIRED)
set(PROMETHEUS_LIB
prometheus-cpp-pull
prometheus-cpp-push
prometheus-cpp-core
curl)
set(PROMETHEUS_LINK_LIBRARIES
prometheus-cpp::pull
prometheus-cpp::push
prometheus-cpp::core
curl)
target_include_directories(statistics PRIVATE ${prometheus-cpp_INCLUDE_DIR})
endif()
target_link_libraries(statistics
PRIVATE
${PROMETHEUS_LIB}
PRIVATE
${PROMETHEUS_LINK_LIBRARIES}
)
if(GKFS_ENABLE_CODE_COVERAGE)
target_code_coverage(distributor AUTO)
target_code_coverage(statistics AUTO)
......
......@@ -61,7 +61,7 @@ Stats::setup_Prometheus(const std::string& gateway_ip,
.Register(*registry);
for(auto e : all_IopsOp) {
iops_Prometheus[e] = &family_counter->Add(
iops_prometheus[e] = &family_counter->Add(
{{"operation", IopsOp_s[static_cast<int>(e)]}});
}
......@@ -71,7 +71,7 @@ Stats::setup_Prometheus(const std::string& gateway_ip,
.Register(*registry);
for(auto e : all_SizeOp) {
size_Prometheus[e] = &family_summary->Add(
size_prometheus[e] = &family_summary->Add(
{{"operation", SizeOp_s[static_cast<int>(e)]}},
Summary::Quantiles{});
}
......@@ -82,7 +82,9 @@ Stats::setup_Prometheus(const std::string& gateway_ip,
Stats::Stats(bool enable_chunkstats, bool enable_prometheus,
const std::string& stats_file,
const std::string& prometheus_gateway) {
const std::string& prometheus_gateway)
: enable_prometheus_(enable_prometheus),
enable_chunkstats_(enable_chunkstats) {
// Init clocks
start = std::chrono::steady_clock::now();
......@@ -91,22 +93,20 @@ Stats::Stats(bool enable_chunkstats, bool enable_prometheus,
// Statistically will be negligible... and we get a faster flow
for(auto e : all_IopsOp) {
IOPS[e] = 0;
TimeIops[e].push_back(std::chrono::steady_clock::now());
iops_mean[e] = 0;
time_iops[e].push_back(std::chrono::steady_clock::now());
}
for(auto e : all_SizeOp) {
SIZE[e] = 0;
TimeSize[e].push_back(pair(std::chrono::steady_clock::now(), 0.0));
size_mean[e] = 0;
time_size[e].push_back(pair(std::chrono::steady_clock::now(), 0.0));
}
#ifdef GKFS_ENABLE_PROMETHEUS
auto pos_separator = prometheus_gateway.find(":");
auto pos_separator = prometheus_gateway.find(':');
setup_Prometheus(prometheus_gateway.substr(0, pos_separator),
prometheus_gateway.substr(pos_separator + 1));
#endif
enable_chunkstats_ = enable_chunkstats;
enable_prometheus_ = enable_prometheus;
if(!stats_file.empty() || enable_prometheus_) {
output_thread_ = true;
......@@ -119,18 +119,19 @@ Stats::Stats(bool enable_chunkstats, bool enable_prometheus,
Stats::~Stats() {
if(output_thread_) {
running = false;
t_output.join();
if(t_output.joinable())
t_output.join();
}
}
void
Stats::add_read(const std::string& path, unsigned long long chunk) {
chunkRead[pair(path, chunk)]++;
chunk_reads[pair(path, chunk)]++;
}
void
Stats::add_write(const std::string& path, unsigned long long chunk) {
chunkWrite[pair(path, chunk)]++;
chunk_writes[pair(path, chunk)]++;
}
......@@ -138,17 +139,17 @@ void
Stats::output_map(std::ofstream& output) {
// Ordering
map<unsigned int, std::set<pair<std::string, unsigned long long>>>
orderWrite;
order_write;
map<unsigned int, std::set<pair<std::string, unsigned long long>>>
orderRead;
order_read;
for(const auto& i : chunkRead) {
orderRead[i.second].insert(i.first);
for(const auto& i : chunk_reads) {
order_read[i.second].insert(i.first);
}
for(const auto& i : chunkWrite) {
orderWrite[i.second].insert(i.first);
for(const auto& i : chunk_writes) {
order_write[i.second].insert(i.first);
}
auto chunkMap =
......@@ -165,25 +166,25 @@ Stats::output_map(std::ofstream& output) {
}
};
chunkMap("READ CHUNK MAP", orderRead, output);
chunkMap("WRITE CHUNK MAP", orderWrite, output);
chunkMap("READ CHUNK MAP", order_read, output);
chunkMap("WRITE CHUNK MAP", order_write, output);
}
void
Stats::add_value_iops(enum IopsOp iop) {
IOPS[iop]++;
iops_mean[iop]++;
auto now = std::chrono::steady_clock::now();
const std::lock_guard<std::mutex> lock(time_iops_mutex);
if((now - TimeIops[iop].front()) > std::chrono::duration(10s)) {
TimeIops[iop].pop_front();
} else if(TimeIops[iop].size() >= gkfs::config::stats::max_stats)
TimeIops[iop].pop_front();
if((now - time_iops[iop].front()) > std::chrono::duration(10s)) {
time_iops[iop].pop_front();
} else if(time_iops[iop].size() >= gkfs::config::stats::max_stats)
time_iops[iop].pop_front();
TimeIops[iop].push_back(std::chrono::steady_clock::now());
time_iops[iop].push_back(std::chrono::steady_clock::now());
#ifdef GKFS_ENABLE_PROMETHEUS
if(enable_prometheus_) {
iops_Prometheus[iop]->Increment();
iops_prometheus[iop]->Increment();
}
#endif
}
......@@ -191,17 +192,17 @@ Stats::add_value_iops(enum IopsOp iop) {
void
Stats::add_value_size(enum SizeOp iop, unsigned long long value) {
auto now = std::chrono::steady_clock::now();
SIZE[iop] += value;
size_mean[iop] += value;
const std::lock_guard<std::mutex> lock(size_iops_mutex);
if((now - TimeSize[iop].front().first) > std::chrono::duration(10s)) {
TimeSize[iop].pop_front();
} else if(TimeSize[iop].size() >= gkfs::config::stats::max_stats)
TimeSize[iop].pop_front();
if((now - time_size[iop].front().first) > std::chrono::duration(10s)) {
time_size[iop].pop_front();
} else if(time_size[iop].size() >= gkfs::config::stats::max_stats)
time_size[iop].pop_front();
TimeSize[iop].push_back(pair(std::chrono::steady_clock::now(), value));
time_size[iop].push_back(pair(std::chrono::steady_clock::now(), value));
#ifdef GKFS_ENABLE_PROMETHEUS
if(enable_prometheus_) {
size_Prometheus[iop]->Observe(value);
size_prometheus[iop]->Observe(value);
}
#endif
if(iop == SizeOp::read_size)
......@@ -220,7 +221,8 @@ Stats::get_mean(enum SizeOp sop) {
auto now = std::chrono::steady_clock::now();
auto duration =
std::chrono::duration_cast<std::chrono::seconds>(now - start);
double value = (double) SIZE[sop] / (double) duration.count();
double value = static_cast<double>(size_mean[sop]) /
static_cast<double>(duration.count());
return value;
}
......@@ -229,7 +231,8 @@ Stats::get_mean(enum IopsOp iop) {
auto now = std::chrono::steady_clock::now();
auto duration =
std::chrono::duration_cast<std::chrono::seconds>(now - start);
double value = (double) IOPS[iop] / (double) duration.count();
double value = static_cast<double>(iops_mean[iop]) /
static_cast<double>(duration.count());
return value;
}
......@@ -239,7 +242,7 @@ Stats::get_four_means(enum SizeOp sop) {
std::vector<double> results = {0, 0, 0, 0};
auto now = std::chrono::steady_clock::now();
const std::lock_guard<std::mutex> lock(size_iops_mutex);
for(auto e : TimeSize[sop]) {
for(auto e : time_size[sop]) {
auto duration =
std::chrono::duration_cast<std::chrono::minutes>(now - e.first)
.count();
......@@ -269,7 +272,7 @@ Stats::get_four_means(enum IopsOp iop) {
std::vector<double> results = {0, 0, 0, 0};
auto now = std::chrono::steady_clock::now();
const std::lock_guard<std::mutex> lock(time_iops_mutex);
for(auto e : TimeIops[iop]) {
for(auto e : time_iops[iop]) {
auto duration =
std::chrono::duration_cast<std::chrono::minutes>(now - e)
.count();
......@@ -331,7 +334,7 @@ Stats::output(std::chrono::seconds d, std::string file_output) {
times++;
if(enable_chunkstats_ and of) {
if(enable_chunkstats_ && of) {
if(times % 4 == 0)
output_map(of.value());
}
......@@ -340,7 +343,7 @@ Stats::output(std::chrono::seconds d, std::string file_output) {
gateway->Push();
}
#endif
while(running and a < d) {
while(running && a < d) {
a += 1s;
std::this_thread::sleep_for(1s);
}
......
......@@ -295,9 +295,10 @@ init_environment() {
#endif
// Initialize Stats
GKFS_DATA->stats(std::make_shared<gkfs::utils::Stats>(
GKFS_DATA->enable_chunkstats(), GKFS_DATA->enable_prometheus(),
GKFS_DATA->stats_file(), GKFS_DATA->prometheus_gateway()));
if(GKFS_DATA->enable_stats() || GKFS_DATA->enable_chunkstats())
GKFS_DATA->stats(std::make_shared<gkfs::utils::Stats>(
GKFS_DATA->enable_chunkstats(), GKFS_DATA->enable_prometheus(),
GKFS_DATA->stats_file(), GKFS_DATA->prometheus_gateway()));
// Initialize data backend
auto chunk_storage_path = fmt::format("{}/{}", GKFS_DATA->rootdir(),
......@@ -654,40 +655,60 @@ parse_input(const cli_options& opts, const CLI::App& desc) {
GKFS_DATA->parallax_size_md(stoi(opts.parallax_size));
}
if(desc.count("--output-stats")) {
auto stats_file = opts.stats_file;
GKFS_DATA->stats_file(stats_file);
GKFS_DATA->spdlogger()->debug("{}() Stats Enabled: '{}'", __func__,
stats_file);
} else {
GKFS_DATA->stats_file("");
GKFS_DATA->spdlogger()->debug("{}() Stats Output Disabled", __func__);
}
/*
* Statistics collection arguments
*/
if(desc.count("--enable-collection")) {
GKFS_DATA->enable_stats(true);
GKFS_DATA->spdlogger()->debug("{}() Collection Enabled", __func__);
GKFS_DATA->spdlogger()->info("{}() Statistic collection enabled",
__func__);
}
if(desc.count("--enable-chunkstats")) {
GKFS_DATA->enable_chunkstats(true);
GKFS_DATA->spdlogger()->debug("{}() ChunkStats Enabled", __func__);
GKFS_DATA->spdlogger()->info("{}() Chunk statistic collection enabled",
__func__);
}
#ifdef GKFS_ENABLE_PROMETHEUS
if(desc.count("--enable-prometheus")) {
GKFS_DATA->enable_prometheus(true);
GKFS_DATA->spdlogger()->debug("{}() Prometheus Enabled", __func__);
if(GKFS_DATA->enable_stats() || GKFS_DATA->enable_chunkstats())
GKFS_DATA->spdlogger()->info(
"{}() Statistics output to Prometheus enabled", __func__);
else
GKFS_DATA->spdlogger()->warn(
"{}() Prometheus statistic output enabled but no stat collection is enabled. There will be no output to Prometheus",
__func__);
}
if(desc.count("--prometheus-gateway")) {
auto gateway = opts.prometheus_gateway;
GKFS_DATA->prometheus_gateway(gateway);
GKFS_DATA->spdlogger()->debug("{}() Prometheus Gateway: '{}'", __func__,
gateway);
if(GKFS_DATA->enable_prometheus())
GKFS_DATA->spdlogger()->info("{}() Prometheus gateway set to '{}'",
__func__, gateway);
else
GKFS_DATA->spdlogger()->warn(
"{}() Prometheus gateway was set but Prometheus is disabled.",
__func__);
}
#endif
if(desc.count("--output-stats")) {
auto stats_file = opts.stats_file;
GKFS_DATA->stats_file(stats_file);
if(GKFS_DATA->enable_stats() || GKFS_DATA->enable_chunkstats())
GKFS_DATA->spdlogger()->info(
"{}() Statistics are written to file '{}'", __func__,
stats_file);
else
GKFS_DATA->spdlogger()->warn(
"{}() --output-stats argument used but no stat collection is enabled. There will be no output to file '{}'",
__func__, stats_file);
} else {
GKFS_DATA->stats_file("");
GKFS_DATA->spdlogger()->debug("{}() Statistics output disabled",
__func__);
}
}
/**
......@@ -755,24 +776,25 @@ main(int argc, const char* argv[]) {
desc.add_option("--parallaxsize", opts.parallax_size,
"parallaxdb - metadata file size in GB (default 8GB), "
"used only with new files");
desc.add_option(
"--output-stats", opts.stats_file,
"Creates a thread that outputs the server stats each 10s, to the file specified");
desc.add_flag(
"--enable-collection",
"Enables collection of normal stats, independent of the output-stats option");
"Enables collection of general statistics. "
"Output requires either the --output-stats or --enable-prometheus argument.");
desc.add_flag(
"--enable-chunkstats",
"Enables collection of chunkstats stats, independent of the output-stats option")
;
"Enables collection of data chunk statistics in I/O operations. "
"Output requires either the --output-stats or --enable-prometheus argument.");
desc.add_option(
"--output-stats", opts.stats_file,
"Creates a thread that outputs the server stats each 10s to the specified file.");
#ifdef GKFS_ENABLE_PROMETHEUS
desc.add_flag(
"--enable-prometheus",
"Enables prometheus output, enables thread");
"Enables prometheus output and a corresponding thread.");
desc.add_option(
"--prometheus-gateway", opts.prometheus_gateway,
"Defines the prometheus gateway, default is 127.0.0.1:9091");
"Defines the prometheus gateway <ip:port> (Default 127.0.0.1:9091).");
#endif
desc.add_flag("--version", "Print version and exit.");
......
......@@ -114,10 +114,7 @@ rpc_srv_write(hg_handle_t handle) {
"{}() path: '{}' chunk_start '{}' chunk_end '{}' chunk_n '{}' total_chunk_size '{}' bulk_size: '{}' offset: '{}'",
__func__, in.path, in.chunk_start, in.chunk_end, in.chunk_n,
in.total_chunk_size, bulk_size, in.offset);
if(GKFS_DATA->enable_stats()) {
GKFS_DATA->stats()->add_value_size(
gkfs::utils::Stats::SizeOp::write_size, bulk_size);
}
#ifdef GKFS_ENABLE_AGIOS
int* data;
......@@ -352,7 +349,13 @@ rpc_srv_write(hg_handle_t handle) {
*/
GKFS_DATA->spdlogger()->debug("{}() Sending output response {}", __func__,
out.err);
return gkfs::rpc::cleanup_respond(&handle, &in, &out, &bulk_handle);
auto handler_ret =
gkfs::rpc::cleanup_respond(&handle, &in, &out, &bulk_handle);
if(GKFS_DATA->enable_stats()) {
GKFS_DATA->stats()->add_value_size(
gkfs::utils::Stats::SizeOp::write_size, bulk_size);
}
return handler_ret;
}
/**
......@@ -414,10 +417,6 @@ rpc_srv_read(hg_handle_t handle) {
"{}() path: '{}' chunk_start '{}' chunk_end '{}' chunk_n '{}' total_chunk_size '{}' bulk_size: '{}' offset: '{}'",
__func__, in.path, in.chunk_start, in.chunk_end, in.chunk_n,
in.total_chunk_size, bulk_size, in.offset);
if(GKFS_DATA->enable_stats()) {
GKFS_DATA->stats()->add_value_size(
gkfs::utils::Stats::SizeOp::read_size, bulk_size);
}
#ifdef GKFS_ENABLE_AGIOS
int* data;
......@@ -619,7 +618,13 @@ rpc_srv_read(hg_handle_t handle) {
*/
GKFS_DATA->spdlogger()->debug("{}() Sending output response, err: {}",
__func__, out.err);
return gkfs::rpc::cleanup_respond(&handle, &in, &out, &bulk_handle);
auto handler_ret =
gkfs::rpc::cleanup_respond(&handle, &in, &out, &bulk_handle);
if(GKFS_DATA->enable_stats()) {
GKFS_DATA->stats()->add_value_size(
gkfs::utils::Stats::SizeOp::read_size, bulk_size);
}
return handler_ret;
}
......