Release notes for Gluster 4.0.0

The Gluster community celebrates 13 years of development with this latest release, Gluster 4.0. This release enables improved integration with containers, an enhanced user experience, and a next-generation management framework. The 4.0 release helps cloud-native app developers choose Gluster as the default scale-out distributed file system.

A selection of the important features and changes are documented on this page. A full list of bugs that have been addressed is included further below.

Announcements

  1. As 3.13 was a short term maintenance release, features which have been included in that release are available with 4.0.0 as well. These features may be of interest to users upgrading to 4.0.0 from releases older than 3.13. The 3.13 release notes capture the list of features that were introduced with 3.13.

NOTE: As 3.13 was a short term maintenance release, it will reach end of life (EOL) with the release of 4.0.0. (reference)

  2. Releases that receive maintenance updates after the 4.0 release are 3.10, 3.12, and 4.0 (reference)

  3. With this release, the CentOS storage SIG will not build server packages for CentOS6. Server packages will be available for CentOS7 only. For ease of migration, client packages on CentOS6 will continue to be published and maintained.

NOTE: This change was announced here

Major changes and features

Features are categorized into the following sections,

Management

GlusterD2 (GD2) is the new management daemon for Gluster-4.0. It is a complete rewrite, with all-new internal core frameworks, making it more scalable, easier to integrate with, and lower in maintenance requirements.

A quick start guide is available to get started with GD2.

GD2 in Gluster-4.0 is a technical preview release. It is not recommended for production use. For the current release glusterd is the preferred management daemon. More information is available in the Limitations section.

GD2 brings many new changes and improvements that affect both users and developers.

Features

The most significant new features brought by GD2 are below.

Native REST APIs

GD2 exposes all of its management functionality via ReST APIs. The ReST APIs accept and return data encoded in JSON. This enables external projects such as Heketi to be better integrated with GD2.
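
For example, assuming GD2's ReST service is reachable on the default management port (shown here as localhost:24007; adjust the host and port to your deployment) and that volume listing lives under the v1 API path, volumes could be listed with a simple HTTP GET:

curl http://localhost:24007/v1/volumes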

CLI

GD2 provides a new CLI, glustercli, built on top of the ReST API. The CLI retains much of the syntax of the old gluster command. In addition we have,

  • Improved CLI help messages
  • Auto completion for sub commands
  • Improved CLI error messages on failure
  • A framework to run glustercli from outside the cluster.

In this release, the following CLI commands are available; a brief usage sketch follows the list.

  • Peer management
    • Peer Probe/Attach
    • Peer Detach
    • Peer Status
  • Volume Management
    • Create/Start/Stop/Delete
    • Expand
    • Options Set/Get
  • Bitrot
    • Enable/Disable
    • Configure
    • Status
  • Geo-replication
    • Create/Start/Pause/Resume/Stop/Delete
    • Configure
    • Status
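
As an illustration, a minimal workflow with the new CLI might look like the following; the host, volume, and brick names are placeholders, and the exact sub-command syntax for this preview release should be confirmed with glustercli --help.

glustercli peer probe server2.example.com
glustercli volume create gv0 server1.example.com:/exports/gv0/brick1 server2.example.com:/exports/gv0/brick1
glustercli volume start gv0
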
Configuration store

GD2 uses etcd to store the Gluster pool configuration, which solves the configuration synchronization issues reported against the Gluster management daemon.

GD2 embeds etcd, and automatically creates and manages an etcd cluster when forming the trusted storage pool. If required, GD2 can also connect to an already existing etcd cluster.

Transaction Framework

GD2 brings a newer, more flexible distributed transaction framework to help it perform actions across the storage pool. The framework provides better control for choosing peers for a Gluster operation, and also provides a mechanism to roll back changes when something goes wrong.

Volume Options

GD2 intelligently fetches and builds the list of volume options by directly reading the xlators' *.so files. It performs the required validations during volume set without maintaining a duplicate list of options. This avoids many of the issues that can arise from mismatched information between GlusterD and the xlator shared libraries.

Volume options listing is also improved, to clearly distinguish configured options and default options. Work is still in progress to categorize these options and tune the list for better understanding and ease of use.

Volfiles generation and management

GD2 has a newer and better structured way for developers to define volfile structure. The new method reduces the effort required to extend graphs or add new graphs.

Also, volfiles are generated on a single peer and stored in the etcd store. This is important for scalability, since volfiles are no longer stored on every node.

Security

GD2 supports TLS for ReST and internal communication, and authentication for the ReST API. If enabled, ReST API access is currently limited to the CLI, or to users who have access to the token file present at $GLUSTERD2_WORKDIR/auth.

Features integration - Self Heal

The self-heal feature is integrated for new volumes created using GlusterD2.

Geo-replication

With GD2 integration, Geo-replication setup becomes very easy. If the master and remote volumes are available and running, Geo-replication can be set up with just a single command.

glustercli geo-replication create <mastervol> <remotehost>::<remotevol>

Geo-replication status output is improved; it clearly distinguishes the details of multiple sessions.

The order of status rows was not predictable in earlier releases, which made it very difficult to correlate Geo-replication status with bricks. With this release, master worker status rows always match the brick list shown in volume info.

Status can be checked using,

glustercli geo-replication status
glustercli geo-replication status <mastervol> <remotehost>::<remotevol>

All the other commands are available as usual.

Limitations:

  • On remote nodes, Geo-replication does not yet create the log directories. As a workaround, create the required log directories on the remote volume's nodes.

Events APIs

The Events API feature is integrated with GD2. Webhooks can be registered to listen for GlusterFS events. Work is in progress to expose a ReST API to view all events that happened in the last 15 minutes.

Limitations

Backward compatibility

GD2 is not backwards compatible with the older GlusterD. Heterogeneous clusters running both GD2 and GlusterD are not possible.

GD2 retains compatibility with Gluster-3.x clients. Old clients will still be able to mount and use volumes exported using GD2.

Upgrade and migration

In Gluster-4.0, GD2 does not support upgrades from Gluster-3.x releases. Gluster-4.0 ships with both GD2 and the existing GlusterD, so users can upgrade to Gluster-4.0 while continuing to use GlusterD.

In Gluster-4.1, users will be able to migrate from GlusterD to GD2. Further, upgrades from Gluster-4.1 running GD2 to higher Gluster versions would be supported from release 4.1 onwards.

After Gluster-4.1, GlusterD will be maintained for a couple of releases, after which GD2 will be the only option to manage the cluster.

Missing and partial commands

Not all commands from GlusterD have been implemented for GD2, and some have been only partially implemented. This means not all GlusterFS features are available in GD2. We aim to bring most of these commands back in Gluster-4.1.

Recovery from full shutdown

With GD2, recovering from a full cluster shutdown requires consulting the available documentation as well as some expertise.

Known Issues

2-node clusters

GD2 does not work well in 2-node clusters. Two main issues exist in this regard.

  • Restarting GD2 fails in 2-node clusters #352
  • Detach fails in 2-node clusters #332

For now, it is recommended to run GD2 only in clusters of 3 or more nodes.

Other issues

Other known issues are tracked on GitHub issues. Please file any other issues you find there as well.

Monitoring

Until now, the absence of live monitoring support in GlusterFS constrained the experience of both users and developers. Statedump is useful for debugging, but is too heavyweight for live monitoring.

Further, the existence of the debug/io-stats translator was not known to many, and gluster volume profile was not recommended as it impacted performance.

In this release, GlusterFS enables a lightweight method to access internal information and avoids the performance penalty and complexities of previous approaches.

1. Metrics collection across every FOP in every xlator

Notes for users: Gluster now has built-in latency measures in the xlator abstraction, enabling the capture of metrics and usage patterns across workloads.

These measures are currently enabled by default.

Limitations: This feature is auto-enabled and cannot be disabled.

2. Monitoring support

Notes for users: Currently, the only project which consumes metrics and provides basic monitoring is glustermetrics, which provides a good idea on how to utilize the metrics dumped from the processes.

Users can send the SIGUSR2 signal to a Gluster process to dump its metrics into the /var/run/gluster/metrics/ directory.
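
For example, to dump metrics from a brick process and inspect the output (which process to signal depends on which component's metrics are of interest):

kill -USR2 $(pidof glusterfsd)
ls /var/run/gluster/metrics/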

Limitations: Currently, the core gluster stack and the memory management systems provide metrics. A framework for generating more metrics is present for other translators and core components; however, additional metrics are not added in this release.

Performance

1. EC: Make metadata [F]GETXATTR operations faster

Notes for users: The disperse translator has improved the performance of the [F]GETXATTR operation. Workloads involving heavy use of extended attributes on files and directories will gain from these improvements.

2. Allow md-cache to serve nameless lookup from cache

Notes for users: The md-cache translator is enhanced to cache nameless lookups (typically seen with NFS workloads). This helps speed up overall operations on the volume by reducing the number of lookups done over the network. Typical workloads that will benefit from this enhancement are,

  • NFS based access
  • Directory listing with FUSE, when ACLs are enabled

3. md-cache: Allow runtime addition of xattrs to the list of xattrs that md-cache caches

Notes for users: md-cache was enhanced to cache gluster-specific extended attributes of a file or directory. This has now been extended to cache user-provided attributes (xattrs) as well.

To add specific xattrs to the cache list, use the following command:

# gluster volume set <volname> xattr-cache-list "<xattr-name>,<xattr-name>,..."

Existing options, such as "cache-samba-metadata" and "cache-swift-metadata", continue to function. The new option "xattr-cache-list" appends to the list generated by the existing options.

Limitations: Setting this option overwrites its previously set value; appending to the existing list of xattrs is not supported in this release.
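
For instance, to cache two application-defined xattrs (the volume and attribute names below are placeholders):

# gluster volume set myvol xattr-cache-list "user.my-app-id,user.my-app-owner"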

4. Cache last stripe of an EC volume while write is going on

Notes for users: The disperse translator now has an option to retain a write-through cache of the last written stripe. This improves small sequential append IO patterns by reducing the need to read a partial stripe for append operations.

To enable this use,

# gluster volume set <volname> disperse.stripe-cache <N>

Where <N> is the number of stripes to cache.

5. tie-breaker logic for blocking inodelks/entrylk in SHD

Notes for users: Self-heal daemon locking has been enhanced to identify situations where a self-heal daemon is actively working on an inode. This enables other self-heal daemons to proceed with other entries in the queue, rather than waiting on a particular entry, thus preventing starvation among self-heal threads.

6. Independent eager-lock options for file and directory accesses

Notes for users: A new option named 'disperse.other-eager-lock' has been added to make it possible to have different settings for regular file accesses and accesses to other types of files (like directories).

By default this option is enabled to ensure the same behavior as the previous versions. If you have multiple clients creating, renaming or removing files from the same directory, you can disable this option to improve the performance for these users while still keeping best performance for file accesses.
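
For example, to disable it on a volume (the volume name is a placeholder):

# gluster volume set myvol disperse.other-eager-lock off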

7. md-cache: Added an option to cache statfs data

Notes for users: This can be controlled with the option performance.md-cache-statfs:

gluster volume set <volname> performance.md-cache-statfs <on|off>

8. Improved disperse performance due to parallel xattrop updates

Notes for users: The disperse translator has been optimized to perform the xattrop update operation in parallel on the bricks during self-heal, improving performance.

Geo-replication

1. Geo-replication: Improve gverify.sh logs

Notes for users: gverify.sh is the script that runs during geo-rep session creation to validate prerequisites. Its logs have been improved, and their locations have changed as follows,

  1. The slave mount log file is changed from <logdir>/geo-replication-slaves/slave.log to <logdir>/geo-replication/gverify-slavemnt.log
  2. The master mount log file is now separate from the slave log file, under <logdir>/geo-replication/gverify-mastermnt.log

2. Geo-rep: Cleanup stale (unusable) XSYNC changelogs.

Notes for users: Stale xsync changelogs were not cleaned up, causing them to accumulate on the system. This change cleans up the stale xsync changelogs if geo-replication has to restart from a faulty state.

Standalone

1. Ability to force permissions while creating files/directories on a volume

Notes for users: Options have been added to the posix translator to override the default umask values with which files and directories are created. This is particularly useful when applications share content based on GID. As the default mode bits prevent such sharing, and supersede ACLs in this regard, these options are provided to control this behavior.

Command usage is as follows:

# gluster volume set <volume name> storage.<option-name> <value>

The valid <value> range is 0000 to 0777.

Valid <option-name> values are:

  • create-mask
  • create-directory-mask
  • force-create-mode
  • force-create-directory

Options "create-mask" and "create-directory-mask" are added to remove the mode bits set on a file or directory when its created. Default value of these options is 0777. Options "force-create-mode" and "force-create-directory" sets the default permission for a file or directory irrespective of the clients umask. Default value of these options is 0000.

2. Replace MD5 usage to enable FIPS support

Notes for users: Previously, if Gluster was run on a FIPS-enabled system, it would crash because MD5 is not FIPS compliant and Gluster uses MD5 checksums in various places, such as self-heal and geo-replication. By replacing MD5 with FIPS-compliant SHA256, Gluster no longer crashes on a FIPS-enabled system.

However, in order for AFR self-heal to work correctly during rolling upgrade to 4.0, we have tied this to a volume option called fips-mode-rchecksum.

Post upgrade, the following command has to be run to change the default from MD5 to SHA256; after this, gluster processes will run cleanly on a FIPS-enabled system.

gluster volume set <VOLNAME> fips-mode-rchecksum on

NOTE: Once glusterfs 3.x is EOL'ed, the usage of the option to control this change will be removed.

Limitations: The snapshot feature in Gluster still uses MD5 checksums, hence running on FIPS-compliant systems requires that the snapshot feature not be used.

3. Dentry fop serializer xlator on brick stack

Notes for users: This feature strengthens the consistency of the file system at the cost of some performance, and is strongly suggested for workloads where consistency is required.

In previous releases, the metadata about files and directories shared across clients was not always consistent when the use-cases/workloads involved a large number of renames and frequent creations and deletions. It does eventually become consistent, but a large proportion of applications are not built to handle eventual consistency.

This feature can be enabled as follows,

# gluster volume set <volname> features.sdfs enable

Limitations: This feature is released as a technical preview, as performance implications are not known completely.

4. Add option to disable nftw() based deletes when purging the landfill directory

Notes for users: The gluster brick processes delete entire sub-trees in an optimized manner using the nftw() call. With this release, an option is added to toggle this behavior in cases where the optimization is not desired.

This is not an exposed option, and needs to be controlled using the volume graph. Adding the disable-landfill-purge option to the storage/posix translator helps toggle this feature.

The default is always enabled, as in the older releases.
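
Since the option is not exposed through the CLI, toggling it means editing the brick volfile; a minimal sketch of the relevant storage/posix stanza is shown below (the volume name, brick path, and other options are placeholders, and manual volfile edits may be regenerated or overwritten by the management daemon):

volume myvol-posix
    type storage/posix
    option directory /bricks/myvol/brick1
    option disable-landfill-purge on
end-volume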

5. Option to limit the number of hard links per inode

Notes for users: An option has been added to POSIX that limits the number of hard links that can be created against an inode (file). This helps when a different hard link limit is needed than what the local FS provides for the bricks.

The option to control this behavior is,

# gluster volume set <volname> storage.max-hardlinks <N>

Where <N> is 0-0xFFFFFFFF. If the local file system that the brick is using has a lower limit than this setting, that limit will be honored.

The default is 100. Setting this to 0 turns the limit off and leaves it to the local file system defaults; setting it to 1 turns off hard links.
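
For example, to turn the limit off and defer entirely to the local file system's own limits (the volume name is a placeholder):

# gluster volume set myvol storage.max-hardlinks 0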

6. Enhancements for directory listing in readdirp

Notes for users: Prior to this release, rebalance performed a fix-layout on a directory before healing its subdirectories. If there were a lot of subdirs, it could take a while before all subdirs were created on the newly added bricks. This led to some missed directory listings.

With this release, child directories are processed before their parents, which changes the way rebalance acts (files within subdirectories are migrated first) and also resolves the directory listing issue.

7. Rebalance skips migration of file if it detects writes from application

Notes for users: The rebalance process skips migration of a file if it detects writes to it from an application. To force migration even in the presence of such writes, "cluster.force-migration" has to be turned on; it is off by default.

The option to control this behavior is,

# gluster volume set <volname> cluster.force-migration <on/off>

Limitations: It is suggested to run remove-brick with cluster.force-migration turned off. This results in files that have active client writes being skipped during rebalance; it is suggested to copy these files manually to a Gluster mount after the remove-brick commit is performed.

Rebalancing files with active write IO to them has a chance of data corruption.

Developer related

1. xlators should not provide init(), fini() and others directly, but have class_methods

Notes for developers: This release brings in a new, unified manner of defining xlator methods, which avoids certain unwanted side-effects of the older method (such as requiring certain symbols to always be defined) and provides a cleaner, single-point registration mechanism for all xlator methods.

The new method, needs just a single symbol in the translator code to be exposed, which is named xlator_api.

The elements of this structure are defined here, and an example usage of the same can be seen here.

The older mechanism is still supported, but not preferred.

2. Framework for distributed testing

Notes for developers: A new framework for running the regression tests for Gluster is added. The README has details on how to use the same.

3. New API for acquiring mandatory locks

Notes for developers: The existing API for byte-range locks, glfs_posix_lock, doesn't allow applications to specify whether a lock is of advisory or mandatory type. This change introduces an extended byte-range lock API with an additional argument for specifying the lock mode as either advisory (the default) or mandatory.

Refer to the header for details on how to use this API.

A sample test program can be found here that also helps in understanding the usage of this API.

4. New on-wire protocol (XDR) needed to support iattx and cleaner dictionary structure

Notes for developers: With code changes to adapt to a newer iatt structure, stricter data format enforcement within dictionaries passed across the wire, and as part of reducing technical debt around the RPC layer, this release introduces a new RPC Gluster protocol version (4.0.0).

Typically this does not impact development, other than ensuring that newly added RPCs are on the 4.0.0 version of the protocol and that dictionaries on the wire are better encoded.

The newer iatt structure can be viewed here.

An example of better encoding dictionary values for wire transfers can be seen here.

Here is some additional information on Gluster RPC programs for the inquisitive.

5. The protocol xlators should prevent sending binary values in a dict over the networks

Notes for developers: Dict data over the wire in Gluster used to be sent in binary form. With this release, since the on-wire protocol is also new, XDR-encoded dict values are sent instead. In the future, any new dict type also needs to handle the required XDR encoding.

6. Translator to handle 'global' options

Notes for developers: A GlusterFS process has around 50 command-line arguments. While many of these options are initial settings, many others can change value during the volume's lifetime. Prior to this release, for many of these options there was no way to change a setting other than restarting the process.

With the introduction of global option translator, it is now possible to handle these options without restarts.

If contributing code that adds to the process options, strongly consider adding the same to the global option translator. An example is provided here.

Major issues

None

Bugs addressed

Bugs addressed since release-3.13.0 are listed below.

  • #827334: gfid is not there in the fsetattr and rchecksum requests being sent from protocol client
  • #1336889: Gluster's XDR does not conform to RFC spec
  • #1369028: rpc: Change the way client uuid is built
  • #1370116: Tests : Adding a test to check for inode leak
  • #1428060: write-behind: Allow trickling-writes to be configurable, fix usage of page_size and window_size
  • #1430305: Fix memory leak in rebalance
  • #1431955: [Disperse] Implement open fd heal for disperse volume
  • #1440659: Add events to notify disk getting fill
  • #1443145: Free runtime allocated resources upon graph switch or glfs_fini()
  • #1446381: detach start does not kill the tierd
  • #1467250: Accessing a file when source brick is down results in that FOP being hung
  • #1467614: Gluster read/write performance improvements on NVMe backend
  • #1469487: sys_xxx() functions should guard against bad return values from fs
  • #1471031: dht_(f)xattrop does not implement migration checks
  • #1471753: [disperse] Keep stripe in in-memory cache for the non aligned write
  • #1474768: The output of the "gluster help" command is difficult to read
  • #1479528: Rebalance estimate(ETA) shows wrong details(as intial message of 10min wait reappears) when still in progress
  • #1480491: tests: Enable geo-rep test cases
  • #1482064: Bringing down data bricks in cyclic order results in arbiter brick becoming the source for heal.
  • #1488103: Rebalance fails on NetBSD because fallocate is not implemented
  • #1492625: Directory listings on fuse mount are very slow due to small number of getdents() entries
  • #1496335: Extreme Load from self-heal
  • #1498966: Test case ./tests/bugs/bug-1371806_1.t is failing
  • #1499566: [Geo-rep]: Directory renames are not synced in hybrid crawl
  • #1501054: Structured logging support for Gluster logs
  • #1501132: posix health check should validate time taken between write timestamp and read timestamp cycle
  • #1502610: disperse eager-lock degrades performance for file create workloads
  • #1503227: [RFE] Changelog option in a gluster volume disables with no warning if geo-rep is configured
  • #1505660: [QUOTA] man page of gluster should be updated to list quota commands
  • #1506104: gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
  • #1506140: Add quorum checks in post-op
  • #1506197: [Parallel-Readdir]Warning messages in client log saying 'parallel-readdir' is not recognized.
  • #1508898: Add new configuration option to manage deletion of Worm files
  • #1508947: glusterfs: Include path in pkgconfig file is wrong
  • #1509189: timer: Possible race condition between gftimer* routines
  • #1509254: snapshot remove does not cleans lvm for deactivated snaps
  • #1509340: glusterd does not write pidfile correctly when forking
  • #1509412: Change default versions of certain features to 3.13 from 4.0
  • #1509644: rpc: make actor search parallel
  • #1509647: rpc: optimize fop program lookup
  • #1509845: In distribute volume after glusterd restart, brick goes offline
  • #1510324: Master branch is broken because of the conflicts
  • #1510397: Compiler atomic built-ins are not correctly detected
  • #1510401: fstat returns ENOENT/ESTALE
  • #1510415: spurious failure of tests/bugs/glusterd/bug-1345727-bricks-stop-on-no-quorum-validation.t
  • #1510874: print-backtrace.sh failing with cpio version 2.11 or older
  • #1510940: The number of bytes of the quota specified in version 3.7 or later is incorrect
  • #1511310: Test bug-1483058-replace-brick-quorum-validation.t fails inconsistently
  • #1511339: In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
  • #1512437: parallel-readdir = TRUE prevents directories listing
  • #1512451: Not able to create snapshot
  • #1512455: glustereventsd hardcodes working-directory
  • #1512483: Not all files synced using geo-replication
  • #1513692: io-stats appends now instead of overwriting which floods filesystem with logs
  • #1513928: call stack group list leaks
  • #1514329: bug-1247563.t is failing on master
  • #1515161: Memory leak in locks xlator
  • #1515163: centos regression fails for tests/bugs/replicate/bug-1292379.t
  • #1515266: Prevent ec from continue processing heal operations after PARENT_DOWN
  • #1516206: EC DISCARD doesn't punch hole properly
  • #1517068: Unable to change the Slave configurations
  • #1517554: help for volume profile is not in man page
  • #1517633: Geo-rep: access-mount config is not working
  • #1517904: tests/bugs/core/multiplex-limit-issue-151.t fails sometimes in upstream master
  • #1517961: Failure of some regression tests on Centos7 (passes on centos6)
  • #1518508: Change GD_OP_VERSION to 3_13_0 from 3_12_0 for RFE https://bugzilla.redhat.com/show_bug.cgi?id=1464350
  • #1518582: Reduce lock contention on fdtable lookup
  • #1519598: Reduce lock contention on protocol client manipulating fd
  • #1520245: High mem/cpu usage, brick processes not starting and ssl encryption issues while testing scaling with multiplexing (500-800 vols)
  • #1520758: [Disperse] Add stripe in cache even if file/data does not exist
  • #1520974: Compiler warning in dht-common.c because of a switch statement on a boolean
  • #1521013: rfc.sh should allow custom remote names for ORIGIN
  • #1521014: quota_unlink_cbk crashes when loc.inode is null
  • #1521116: Absorb all test fixes from 3.8-fb branch into master
  • #1521213: crash when gifs_set_logging is called concurrently
  • #1522651: rdma transport may access an obsolete item in gf_rdma_device_t->all_mr, and causes glusterfsd/glusterfs process crash.
  • #1522662: Store allocated objects in the mem_acct
  • #1522775: glusterd consuming high memory
  • #1522847: gNFS Bug Fixes
  • #1522950: io-threads is unnecessarily calling accurate time calls on every FOP
  • #1522968: glusterd bug fixes
  • #1523295: md-cache should have an option to cache STATFS calls
  • #1523353: io-stats bugs and features
  • #1524252: quick-read: Discard cache for fallocate, zerofill and discard ops
  • #1524365: feature/bitrot: remove internal xattrs from lookup cbk
  • #1524816: heketi was not removing the LVs associated with Bricks removed when Gluster Volumes were deleted
  • #1526402: glusterd crashes when 'gluster volume set help' is executed
  • #1526780: ./run-tests-in-vagrant.sh fails because of disabled Gluster/NFS
  • #1528558: /usr/sbin/glusterfs crashing on Red Hat OpenShift Container Platform node
  • #1528975: Fedora 28 (Rawhide) renamed the pyxattr package to python2-pyxattr
  • #1529440: Files are not rebalanced if destination brick(available size) is of smaller size than source brick(available size)
  • #1529463: JWT support without external dependency
  • #1529480: Improve geo-replication logging
  • #1529488: entries not getting cleared post healing of softlinks (stale entries showing up in heal info)
  • #1529515: AFR: 3-way-replication: gluster volume set cluster.quorum-count should validate max no. of brick count to accept
  • #1529883: glusterfind is extremely slow if there are lots of changes
  • #1530281: glustershd fails to start on a volume force start after a brick is down
  • #1530910: Use after free in cli_cmd_volume_create_cbk
  • #1531149: memory leak: get-state leaking memory in small amounts
  • #1531987: increment of a boolean expression warning
  • #1532238: Failed to access volume via Samba with undefined symbol from socket.so
  • #1532591: Tests: Geo-rep tests are failing in few regression machines
  • #1533594: EC test fails when brick mux is enabled
  • #1533736: posix_statfs returns incorrect f_bfree values if brick is full.
  • #1533804: readdir-ahead: change of cache-size should be atomic
  • #1533815: Mark ./tests/basic/ec/heal-info.t as bad
  • #1534602: FUSE reverse notificatons are not written to fuse dump
  • #1535438: Take full lock on files in 3 way replication
  • #1535772: Random GlusterFSD process dies during rebalance
  • #1536913: tests/bugs/cli/bug-822830.t fails on Centos 7 and locally
  • #1538723: build: glibc has removed legacy rpc headers and rpcgen in Fedora28, use libtirpc
  • #1539657: Georeplication tests intermittently fail
  • #1539701: gsyncd is running gluster command to get config file path is not required
  • #1539842: GlusterFS 4.0.0 tracker
  • #1540438: Remove lock recovery logic from client and server protocol translators
  • #1540554: Optimize glusterd_import_friend_volume code path
  • #1540882: Do lock conflict check correctly for wait-list
  • #1541117: sdfs: crashes if the features is enabled
  • #1541277: dht_layout_t leak in dht_populate_inode_for_dentry
  • #1541880: Volume wrong size
  • #1541928: A down brick is incorrectly considered to be online and makes the volume to be started without any brick available
  • #1542380: Changes to self-heal logic w.r.t. detecting of split-brains
  • #1542382: Add quorum checks in post-op
  • #1542829: Too many log messages about dictionary and options
  • #1543487: dht_lookup_unlink_of_false_linkto_cbk fails with "Permission denied"
  • #1543706: glusterd fails to attach brick during restart of the node
  • #1543711: glustershd/glusterd is not using right port when connecting to glusterfsd process
  • #1544366: Rolling upgrade to 4.0 is broken
  • #1544638: 3.8 -> 3.10 rolling upgrade fails (same for 3.12 or 3.13) on Ubuntu 14
  • #1545724: libgfrpc does not export IPv6 RPC methods even with --with-ipv6-default
  • #1547635: add option to bulld rpm without server
  • #1547842: Typo error in __dht_check_free_space function log message
  • #1548264: [Rebalance] "Migrate file failed: : failed to get xattr [No data available]" warnings in rebalance logs
  • #1548271: DHT calls dht_lookup_everywhere for 1xn volumes
  • #1550808: memory leak in pre-op in replicate volumes for every write
  • #1551112: Rolling upgrade to 4.0 is broken
  • #1551640: GD2 fails to dlopen server xlator
  • #1554077: 4.0 clients may fail to convert iatt in dict when recieving the same from older (< 4.0) servers