- Oct 16, 2023
Julius Athenstaedt authored
- Oct 02, 2023
Alberto Miranda authored
Resolve "Remove Parallax Testing" This MR updates testing for the new docker CI environment: - Removes Parallax Testing to make faster tests (we will activate again if needed, or changes are applied) - Adds cstdint for uint types (gcc 10..) - Updates python sh library/package to a newer version. This solves and produces some changes. - Solved a bug in sfind Closes #268 Closes #268 See merge request !169
- Sep 29, 2023
- Sep 28, 2023
Ramon Nou authored
- Sep 27, 2023
- Sep 22, 2023
Ramon Nou authored
- Jun 20, 2023
Marc Vef authored
Resolve "Support Spack and others" ### Usage information Download Spack and setup environment: ```bash git clone -c feature.manyFiles=true https://github.com/spack/spack.git . spack/share/spack/setup-env.sh ``` Add GekkoFS Spack repository to Spack: ```bash spack repo add gekkofs/scripts/spack ``` Check that Spack can find GekkoFS: ``` spack info gekkofs ``` Install GekkoFS (and run optional tests). Check `spack info gekkofs` for available option and versions: ```bash spack install gekkofs # for installing tests dependencies and running tests spack install -v --test=root gekkofs +tests ``` Load GekkoFS into environment: ``` spack load gekkofs ``` If you want to use the latest developer branch of GekkoFS: ``` spack install gekkofs@latest ``` The default is using version 0.9.1 the last stable release. ### TODO - [x] Base Spack functionality, versions, and configuration support - [x] Documentation - [x] Advanced functionality, more detailed configuration support, e.g., Parallax and Prometheus - [x] More easy way to get path to client library - [x] Add GekkoFS client wrapper for `LD_PRELOAD` - [ ] Add final version to main Spack repository if possible. (Not possible right now as it not clear how 3rd party libraries should be treated. Closes #58 Closes #58 See merge request !137
- Jun 19, 2023
Marc Vef authored
Arm Support

This MR adds ARM support. It adds a new profile to download a fork of syscall_intercept that supports ARM. The modifications do not use an ARM-specific define, as it is not needed; we only need to check whether a given syscall exists (see the sketch below).

We can also close !127; I think this merge updates and solves some of its issues.

Closes #244

See merge request !160
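A minimal sketch of the "check whether a syscall exists" approach, assuming compile-time detection via the `SYS_*` macros (this is illustrative, not the actual GekkoFS or syscall_intercept code). On AArch64, for instance, `SYS_open` is not defined and only `SYS_openat` is provided, so testing the syscall number itself replaces any ARM-specific define.

```cpp
#include <cstdio>
#include <sys/syscall.h>  // defines the SYS_* numbers for the target architecture

int main() {
    // Instead of testing an ARM-specific define, test whether the syscall
    // number itself exists on this architecture.
#ifdef SYS_open
    std::printf("SYS_open exists on this architecture\n");
#else
    // AArch64 has no open(2) syscall; openat(2) must be used instead.
    std::printf("SYS_open is absent; use SYS_openat instead\n");
#endif
    return 0;
}
```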
Marc Vef authored
Resolve "Support for (parallel) append operations" This MR adds (parallel) append support for write operations. There was already some append code available that was run for each `pwrite` when the file size was updated. As a result, parts of strings were serialized and deserialized within RocksDB's merge operation even if not needed. Previously, `open()` was returning `ENOTSUP` when `O_APPEND` was used. When removing this statement, append was not functional due to how size updates and the append case worked. Overall, `gkfs_pwrite()` which first updates the file size and then writes the file was quite messy with unused return values and arguments. Further, the server calculated the updated size without regard on what occurred in the KV store. Therefore, as part of this MR, the entire update size process within `pwrite()` was refactored. Parallel appends are achieved by hooking into RocksDB's `Merge Operator` which is triggered at some point (e.g., during `Get()`). Without append being used, the offset is known to the client already and therefore the file size is updated to `offset + count` set in `gkfs_pwrite()`. There is no further coordination required since overlapping offsets are the user's responsibility. The code path for non-append operations was slightly optimized but largely remains the same. Append operations are treated differently because it is not clear during a write operation where a process calling `write()` should start writing. Using the EOF information that is loaded during open may be outdated when multiple processes try to append at the same time -> causing a race condition. Since the size update on the daemon is atomic, a process (updating the size before performing a write) can be reserved a corresponding byte interval `[EOF, EOF + count]`. Now, calling `Merge()` on RocksDB does not trigger a Merge operation since multiple Merges are batched before the operation is run. For append, the Merge operation is forced by running `Get()` on RocksDB. The corresponding Merge operation then responds the starting write offset to the updating size process. Therefore, appends are more expensive than non-appends. Lastly, some missing documentation was added. As reported, this MR adds support for the DASI application, used in IO-SEA. Note: This MR does not consider failing writes which would require us to collapse a reserved interval and tie up the hole in the file. Closes #254 Closes #12 Closes #12 and #254 See merge request !164
- Jun 12, 2023