Release notes for Gluster 3.11.0

This is a major Gluster release that includes some substantial changes. The features revolve around improvements to small-file workloads, SELinux support, the Halo replication enhancement from Facebook, and several usability and performance improvements, among other bug fixes.

The most notable features and changes are documented on this page. A full list of bugs that have been addressed is included further below.

Major changes and features

Switched to storhaug for ganesha and samba high availability

Notes for users:

High Availability (HA) support for NFS-Ganesha (NFS) and Samba (SMB) is managed by Storhaug. Like the old HA implementation, Storhaug uses Pacemaker and Corosync to manage Virtual (floating) IP addresses (VIPs) and fencing. See https://github.com/linux-ha-storage/storhaug.

Storhaug packages are available in Fedora and for several popular Linux distributions from https://download.gluster.org/pub/gluster/storhaug/

Note: Storhaug does not dictate which fencing solution should be used. There are many to choose from in most popular Linux distributions. Choose the one that best fits your environment and use it.

Added SELinux support for Gluster Volumes

Notes for users:

A new xlator has been introduced (features/selinux) to allow setting the extended attribute (security.selinux) that is needed to support SELinux on Gluster volumes. The current ability to enforce the SELinux policy on the Gluster Storage servers prevents setting the extended attribute for use on the client side. The new translator converts the client-side SELinux extended attribute to a Gluster internal representation (the trusted.glusterfs.selinux extended attribute) to prevent problems.

This feature is intended to be the base for implementing Labelled-NFS in NFS-Ganesha and SELinux support for FUSE mounts in the Linux kernel.

Limitations:

  • The Linux kernel does not support mounting of FUSE filesystems with SELinux support, yet.
  • NFS-Ganesha does not support Labelled-NFS, yet.

Known Issues:

  • There has been limited testing, because other projects cannot consume the functionality until it is part of a release. So far, no problems have been observed, but this might change once other projects start using it seriously.

Several memory leaks are fixed in gfapi during graph switches

Notes for users:

The Gluster API (gfapi) has had a few memory leak issues that arise specifically during changes to volume graphs (volume topology or options). Several of these are addressed in this release, and work to iron out the remaining leaks will continue across the next few releases.

Limitations:

  • There are still a few leaks to be addressed when graph switches occur

get-state CLI is enhanced to provide client and brick capacity related information

Notes for users:

The get-state CLI output now optionally includes client-related information for locally running bricks, as obtained from gluster volume status <volname>|all clients. Getting the client details is a relatively more costly operation, so these details are only added to the output when the get-state command is invoked with the 'detail' option. The following is the updated usage for the get-state command:

 # gluster get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail]

In addition to client details, capacity-related information for the respective local bricks, as obtained from gluster volume status <volname>|all detail, has also been added to the get-state output.
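
As an illustration (the output directory and file name below are hypothetical), a full state dump including client and capacity details could be requested with:

# gluster get-state glusterd odir /var/run/gluster/ file glusterd-state-with-details detail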

Limitations:

  • Information for non-local bricks, and for clients connected to non-local bricks, is not available. This is a known limitation of the get-state command, which only reports on local bricks.

Ability to serve negative lookups from cache has been added

Notes for users:

Before creating or renaming any file, lookups (around 5-6 when using the SMB protocol) are sent to verify whether the file already exists. The negative lookup cache serves these lookups from cache when possible, thus improving create/rename performance when accessing a Gluster volume over SMB.

Execute the following commands to enable negative-lookup cache:

# gluster volume set <volname> features.cache-invalidation on
# gluster volume set <volname> features.cache-invalidation-timeout 600
# gluster volume set <volname> nl-cache on
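
The effective value of the option can be checked afterwards, for example (assuming the same option name used above):

# gluster volume get <volname> nl-cache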

Limitations

  • For this release, this feature is supported only for SMB access

New xlator to help developers detect resource leaks has been added

Notes for users:

This is intended as a developer feature, and hence there is no direct user impact.

For developers, the sink xlator provides ways to help detect memory leaks in gfapi and any xlator in between the API and the sink xlator.

More details can be found in this thread on the gluster-devel lists.

Feature for metadata-caching/small file performance is production ready

Notes for users:

Over the course of several releases, fixes and enhancements have been made to the mdcache xlator to improve the performance of small-file workloads. As a result, with this release we are announcing this feature as production ready.

In order to improve the performance of directory operations of Gluster volumes, the maximum metadata (stat, xattr) caching time on the client side is increased to 10 minutes, without compromising on the consistency of the cache. Significant performance improvements can be achieved in the following workloads on FUSE and SMB access, by enabling metadata caching:

  • Listing of directories (recursive)
  • Creating files
  • Deleting files
  • Renaming files

To enable metadata caching execute the following commands:

# gluster volume set <volname> group metadata-cache
# gluster volume set <volname> network.inode-lru-limit <n>

<n> is set to 50000 by default. It should be increased if the number of concurrently accessed files in the volume is very high. Increasing this number increases the memory footprint of the brick processes.
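
For example, on a volume with a large number of concurrently accessed files, the limit could be raised as follows (200000 is only an illustrative value):

# gluster volume set <volname> network.inode-lru-limit 200000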

"Parallel Readdir" feature introduced in 3.10.0 is production ready

Notes for users:

This feature was introduced in 3.10 and was experimental in nature. Over the course of 3.10 minor releases and 3.11.0 release, this feature has been stabilized and is ready for use in production environments.

For further details, refer to the 3.10.0 release notes.
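
As a reminder, the feature is controlled through the performance.parallel-readdir volume option (see the 3.10.0 release notes for the full set of prerequisites); a typical way to enable it is:

# gluster volume set <volname> performance.parallel-readdir on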

Object versioning is enabled only if bitrot is enabled

Notes for users:

Object versioning was turned on by default on brick processes by the bitrot xlator. This caused additional extended attributes to be set and looked up on the backend file system for every object, even when bitrot was not in use, which at times caused high CPU utilization on the brick processes.

To fix this, object versioning is disabled by default, and is only enabled as a part of enabling the bitrot option.
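
In other words, the versioning-related extended attributes are only maintained once bitrot is switched on for a volume, for example:

# gluster volume bitrot <volname> enable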

Distribute layer provides more robust transactions during directory namespace operations

Notes for users:

The distribute layer in Gluster creates and maintains directories on all subvolumes; as a result, operations involving creation, manipulation, and deletion of these directories needed better transaction support to ensure consistency of the file system.

This transaction support is now implemented in the distribute layer, ensuring better consistency of the file system as a whole when racing operations act on the same directory object.

gfapi extended readdirplus API has been added

Notes for users:

An extended readdirplus API glfs_xreaddirplus is added to get extra information along with readdirplus results on demand. This is useful for applications (like NFS-Ganesha, which needs handles) that want to retrieve more information along with stat in a single call, thus improving the performance of workloads involving directory listing.

The API syntax and usage can be found in the glfs.h header file.
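
The following is a minimal, illustrative sketch of how an application might consume the API. The entry points assumed here (glfs_xreaddirplus_r(), glfs_xreaddirplus_get_stat(), glfs_xreaddirplus_get_object(), glfs_free(), the struct glfs_xreaddirp_stat type, and the GFAPI_XREADDIRP_STAT/GFAPI_XREADDIRP_HANDLE flags) should be verified against the installed glfs.h and glfs-handles.h headers.

    /* Sketch only: list a directory, fetching stat and handle per entry.
     * Identifiers are as recalled from glfs.h/glfs-handles.h; verify against
     * the installed headers. Assumes dfd was obtained via glfs_opendir(). */
    #include <stdio.h>
    #include <stdint.h>
    #include <dirent.h>
    #include <glusterfs/api/glfs.h>
    #include <glusterfs/api/glfs-handles.h>

    static void
    list_directory (struct glfs_fd *dfd)
    {
            struct dirent entry, *result = NULL;
            struct glfs_xreaddirp_stat *xstat = NULL;
            uint32_t flags = GFAPI_XREADDIRP_STAT | GFAPI_XREADDIRP_HANDLE;

            for (;;) {
                    result = NULL;
                    xstat = NULL;

                    /* one call returns the dirent plus the requested extras */
                    if (glfs_xreaddirplus_r (dfd, flags, &xstat, &entry, &result) < 0)
                            break;              /* error */
                    if (result == NULL)
                            break;              /* end of directory */

                    struct stat *st = xstat ? glfs_xreaddirplus_get_stat (xstat) : NULL;
                    struct glfs_object *obj = xstat ? glfs_xreaddirplus_get_object (xstat) : NULL;

                    printf ("%s size=%lld handle=%p\n", result->d_name,
                            st ? (long long) st->st_size : -1LL, (void *) obj);

                    /* if the handle is needed beyond this iteration, the application
                     * should copy/consume it before the xstat result is freed */
                    if (xstat)
                            glfs_free (xstat);
            }
    }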

Limitations:

  • This API currently supports returning only stat and handle (glfs_object) information for each dirent of the directory, but it can be extended in the future.

Improved adoption of standard refcounting functions across the code

Notes for users:

This change does not impact users; it is an internal code cleanup activity that ensures reference counting is done in a standard manner, thereby avoiding bugs caused by divergent implementations of the same functionality.

Known Issues:

  • This standardization started with this release and is expected to continue across releases.

Performance improvements to rebalance have been made

Notes for users:

Both crawling and migration have been improved in rebalance. The crawler is now optimized to split the migration load across replica and EC nodes. Prior to this change, if the replicating bricks were distributed over two nodes, only one node used to do the migration; with the new optimization both nodes divide the load between them, boosting migration performance. There have also been optimizations to avoid redundant network operations (RPC calls) while migrating a file.

Further, file migration now avoids the syncop framework and is managed entirely by rebalance threads, giving an additional performance boost.

There is also a change to the throttle settings for rebalance. Earlier, users could choose from only three values, "lazy", "normal", and "aggressive", which was not flexible enough. To overcome this, number-based throttle settings have been introduced. Users can now set a number that indicates how many threads the rebalance process will work with, which translates to the number of files being migrated in parallel.
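
For example, to have the rebalance process work with 5 threads (an illustrative value, assuming the rebal-throttle volume option referenced in the bug list below):

# gluster volume set <volname> rebal-throttle 5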

Halo Replication feature in AFR has been introduced

Notes for users:

Halo Geo-replication is a feature that allows Gluster or NFS clients to write locally to their region (as defined by a latency "halo", or threshold), and have their writes asynchronously propagate from their origin to the rest of the cluster. Clients can also write synchronously to the whole cluster simply by specifying a very large halo latency (e.g. 10 seconds), which will include all bricks. To enable the halo feature, execute the following command:

# gluster volume set <volname> cluster.halo-enabled yes

You may have to set the following options to change the defaults.

cluster.halo-shd-latency: The threshold below which self-heal daemons will consider children (bricks) connected.

cluster.halo-nfsd-latency: The threshold below which NFS daemons will consider children (bricks) connected.

cluster.halo-latency: The threshold below which all other clients will consider children (bricks) connected.

cluster.halo-min-replicas: The minimum number of replicas which are to be enforced regardless of latency specified in the above 3 options. If the number of children falls below this threshold the next best (chosen by latency) shall be swapped in.
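
For example, using the option names above (the values shown are purely illustrative and should be tuned per environment):

# gluster volume set <volname> cluster.halo-min-replicas 2
# gluster volume set <volname> cluster.halo-latency 10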

FALLOCATE support with EC

Notes for users:

Support for the FALLOCATE file operation on EC volumes is added with this release. EC volumes now support basic FALLOCATE functionality.
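
With a dispersed volume mounted over FUSE (the mount path below is hypothetical), space can now be preallocated in the usual way:

# fallocate -l 10M /mnt/ec-volume/preallocated.img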

Self-heal window-size control option for EC

Notes for users:

Support for controlling the maximum size of read/write operations carried out during the self-heal process has been added with this release. Users can tune the 'disperse.self-heal-window-size' option on a disperse volume to adjust this size.
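
For example (the value shown is illustrative):

# gluster volume set <volname> disperse.self-heal-window-size 2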

Major issues

  1. Expanding a gluster volume that is sharded may cause file corruption

    • Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. bricks are added/removed and a rebalance is run), there are reports of VM images getting corrupted.
    • The status of this bug can be tracked here: #1426508
    • The latest series of fixes for the issue (which are present in this release as well) have not shown the earlier corruption, hence the fixes look good, but this is nevertheless being kept on the watch list.

Bugs addressed

Bugs addressed since release-3.10.0 are listed below.

  • #1169302: Unable to take Statedump for gfapi applications
  • #1197308: do not depend on "killall", use "pkill" instead
  • #1198849: Minor improvements and cleanup for the build system
  • #1257792: bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
  • #1261689: geo-replication faulty
  • #1264849: RFE : Create trash directory only when its is enabled
  • #1297182: Mounting with "-o noatime" or "-o noexec" causes "nosuid,nodev" to be set as well
  • #1318100: RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
  • #1321578: auth.allow and auth.reject not working host mentioned with hostnames/FQDN
  • #1322145: Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
  • #1326219: Make Gluster/NFS an optional component
  • #1328342: [tiering]: gluster v reset of watermark levels can allow low watermark level to have a higher value than hi watermark level
  • #1353952: [geo-rep]: rsync should not try to sync internal xattrs
  • #1356076: DHT doesn't evenly balance files on FreeBSD with ZFS
  • #1359599: BitRot :- bit-rot.signature and bit-rot.version xattr should not be set if bitrot is not enabled on volume
  • #1369393: dead loop in changelog_rpc_server_destroy
  • #1383893: glusterd restart is starting the offline shd daemon on other node in the cluster
  • #1384989: libglusterfs : update correct memory segments in glfs-message-id
  • #1385758: [RFE] Support multiple bricks in one process (multiplexing)
  • #1386578: mounting with rdma protocol fails for tcp,rdma volumes
  • #1389127: build: fixes to build 3.9.0rc2 on Debian (jessie)
  • #1390050: Elasticsearch get CorruptIndexException errors when running with GlusterFS persistent storage
  • #1393338: Rebalance should skip the file if the file has hardlinks instead of failing
  • #1395643: [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
  • #1396004: RFE: An administrator friendly way to determine rebalance completion time
  • #1399196: use attribute(format(printf)) to catch format string errors at compile time
  • #1399593: Obvious typo in cleanup code in rpc_clnt_notify
  • #1401571: bitrot quarantine dir misspelled
  • #1401812: RFE: Make readdirp parallel in dht
  • #1401877: [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
  • #1402254: compile warning unused variable
  • #1402661: Samba crash when mounting a distributed dispersed volume over CIFS
  • #1404424: The data-self-heal option is not honored in AFR
  • #1405628: Socket search code at startup is slow
  • #1408809: [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
  • #1409191: Sequential and Random Writes are off target by 12% and 22% respectively on EC backed volumes over FUSE
  • #1410425: [GNFS+EC] Cthon failures/issues with Lock/Special Test cases on disperse volume with GNFS mount
  • #1410701: [SAMBA-SSL] Volume Share hungs when multiple mount & unmount is performed over a windows client on a SSL enabled cluster
  • #1411228: remove-brick status shows 0 rebalanced files
  • #1411334: Improve output of "gluster volume status detail"
  • #1412135: rename of the same file from multiple clients with caching enabled may result in duplicate files
  • #1412549: EXPECT_WITHIN is taking too much time even if the result matches with expected value
  • #1413526: glusterfind: After glusterfind pre command execution all temporary files and directories /usr/var/lib/misc/glusterfsd/glusterfind/// should be removed
  • #1413971: Bonnie test suite failed with "Can't open file" error
  • #1414287: repeated operation failed warnings in gluster mount logs with disperse volume
  • #1414346: Quota: After upgrade from 3.7 to higher version , gluster quota list command shows "No quota configured on volume repvol"
  • #1414645: Typo in glusterfs code comments
  • #1414782: Add logs to selfheal code path to be helpful for debug
  • #1414902: packaging: python/python2(/python3) cleanup
  • #1415115: client process crashed due to write behind translator
  • #1415590: removing old tier commands under the rebalance commands
  • #1415761: [Remove-brick] Hardlink migration fails with "lookup failed (No such file or directory)" error messages in rebalance logs
  • #1416251: [SNAPSHOT] With all USS plugin enable .snaps directory is not visible in cifs mount as well as windows mount
  • #1416520: Missing FOPs in the io-stats xlator
  • #1416689: Fix spurious failure of ec-background-heal.t
  • #1416889: Simplify refcount API for free'ing function
  • #1417050: [Stress] : SHD Logs flooded with "Heal Failed" messages,filling up "/" quickly
  • #1417466: Prevent reverse heal from happening
  • #1417522: Automatic split brain resolution must check for all the bricks to be up to avoiding serving of inconsistent data(visible on x3 or more)
  • #1417540: Mark tests/bitrot/bug-1373520.t bad
  • #1417588: glusterd is setting replicate volume property over disperse volume or vice versa
  • #1417913: Hangs on 32 bit systems since 3.9.0
  • #1418014: disable client.io-threads on replica volume creation
  • #1418095: Portmap allocates way too much memory (256KB) on stack
  • #1418213: [Ganesha+SSL] : Bonnie++ hangs during rewrites.
  • #1418249: [RFE] Need to have group cli option to set all md-cache options using a single command
  • #1418259: Quota: After deleting directory from mount point on which quota was configured, quota list command output is blank
  • #1418417: packaging: remove glusterfs-ganesha subpackage
  • #1418629: glustershd process crashed on systemic setup
  • #1418900: [RFE] Include few more options in virt file
  • #1418973: removing warning related to enum, to let the build take place without errors for 3.10
  • #1420166: The rebal-throttle setting does not work as expected
  • #1420202: glusterd is crashed at the time of stop volume
  • #1420434: Trash feature improperly disabled
  • #1420571: Massive xlator_t leak in graph-switch code
  • #1420611: when server-quorum is enabled, volume get returns 0 value for server-quorum-ratio
  • #1420614: warning messages seen in glusterd logs while setting the volume option
  • #1420619: Entry heal messages in glustershd.log while no entries shown in heal info
  • #1420623: [RHV-RHGS]: Application VM paused after add brick operation and VM didn't comeup after power cycle.
  • #1420637: Modified volume options not synced once offline nodes comes up.
  • #1420697: CLI option "--timeout" is accepting non numeric and negative values.
  • #1420713: glusterd: storhaug, remove all vestiges ganesha
  • #1421023: Binary file gf_attach generated during build process should be git ignored
  • #1421590: Bricks take up new ports upon volume restart after add-brick op with brick mux enabled
  • #1421600: Test files clean up for tier during 3.10
  • #1421607: Getting error messages in glusterd.log when peer detach is done
  • #1421653: dht_setxattr returns EINVAL when a file is deleted during the FOP
  • #1421721: volume start command hangs
  • #1421724: glusterd log is flooded with stale disconnect rpc messages
  • #1421759: Gluster NFS server crashing in __mnt3svc_umountall
  • #1421937: [Replicate] "RPC call decoding failed" leading to IO hang & mount inaccessible
  • #1421938: systemic testing: seeing lot of ping time outs which would lead to splitbrains
  • #1421955: Disperse: Fallback to pre-compiled code execution when dynamic code generation fails
  • #1422074: GlusterFS truncates nanoseconds to microseconds when setting mtime
  • #1422152: Bricks not coming up when ran with address sanitizer
  • #1422624: Need to improve remove-brick failure message when the brick process is down.
  • #1422760: [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync
  • #1422776: multiple glusterfsd process crashed making the complete subvolume unavailable
  • #1423369: unnecessary logging in rda_opendir
  • #1423373: Crash in index xlator because of race in inode_ctx_set and inode_ref
  • #1423410: Mount of older client fails
  • #1423413: Self-heal fail an WORMed-Files
  • #1423448: glusterfs-fuse RPM now depends on gfapi
  • #1424764: Coverty scan return false positive regarding crypto
  • #1424791: Coverty scan detect a potential free on uninitialised pointer in error code path
  • #1424793: Missing verification of fcntl return code
  • #1424796: Remove deadcode found by coverty in glusterd-utils.c
  • #1424802: Missing call to va_end in xlators/cluster/dht/src/dht-common.c
  • #1424809: Fix another coverty error for useless goto
  • #1424815: Fix erronous comparaison of flags resulting in UUID always sent
  • #1424894: Some switches don't have breaks causing unintended fall throughs.
  • #1424905: Coverity: Memory issues and dead code
  • #1425288: glusterd is not validating for allowed values while setting "cluster.brick-multiplex" property
  • #1425515: tests: quota-anon-fd-nfs.t needs to check if nfs mount is avialable before mounting
  • #1425623: Free all xlator specific resources when xlator->fini() gets called
  • #1425676: gfids are not populated in release/releasedir requests
  • #1425703: [Disperse] Metadata version is not healing when a brick is down
  • #1425743: Tier ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
  • #1426032: Log message shows error code as success even when rpc fails to connect
  • #1426052: ‘state’ set but not used error when readline and/or ncurses is not installed
  • #1426059: gluster fuse client losing connection to gluster volume frequently
  • #1426125: Add logs to identify whether disconnects are voluntary or due to network problems
  • #1426509: include volume name in rebalance stage error log
  • #1426667: [GSS] NFS Sub-directory mount not working on solaris10 client
  • #1426891: script to resolve function name and line number from backtrace
  • #1426948: [RFE] capture portmap details in glusterd's statedump
  • #1427012: Disconnects in nfs mount leads to IO hang and mount inaccessible
  • #1427018: [RFE] - Need a way to reduce the logging of messages "Peer CN" and "SSL verification suceeded messages" in glusterd.log file
  • #1427404: Move tests/bitrot/bug-1373520.t to bad tests and fix the underlying issue in posix
  • #1428036: Update rfc.sh to check/request issue # when a commit is an “rfc”
  • #1428047: Require a Jenkins job to validate Change-ID on commits to branches in glusterfs repository
  • #1428055: dht/rebalance: Increase maximum read block size from 128 KB to 1 MB
  • #1428058: tests: Fix tests/bugs/distribute/bug-1161311.t
  • #1428064: nfs: Check for null buf, and set op_errno to EIO not 0
  • #1428068: nfs: Tear down transports for requests that arrive before the volume is initialized
  • #1428073: nfs: Fix compiler warning when calling svc_getcaller
  • #1428093: protocol/server: Fix crash bug in unlink flow
  • #1428510: memory leak in features/locks xlator
  • #1429198: Restore atime/mtime for symlinks and other non-regular files.
  • #1429200: disallow increasing replica count for arbiter volumes
  • #1429330: [crawler]: auxiliary mount remains even after crawler finishes
  • #1429696: ldd libgfxdr.so.0.0.1: undefined symbol: __gf_free
  • #1430042: Transport endpoint not connected error seen on client when glusterd is restarted
  • #1430148: USS is broken when multiplexing is on
  • #1430608: [RFE] Pass slave volume in geo-rep as read-only
  • #1430719: gfid split brains not getting resolved with automatic splitbrain resolution
  • #1430841: build/packaging: Debian and Ubuntu don't have /usr/libexec/; results in bad packages
  • #1430860: brick process crashes when glusterd is restarted
  • #1431183: [RFE] Gluster get state command should provide connected client related information
  • #1431192: [RFE] Gluster get state command should provide volume and cluster utilization related information
  • #1431908: Enabling parallel-readdir causes dht linkto files to be visible on the mount,
  • #1431963: Warn CLI while creating replica 2 volumes
  • #1432542: Glusterd crashes when restarted with many volumes
  • #1433405: GF_REF_PUT() should return 0 when the structure becomes invalid
  • #1433425: Unrecognized filesystems (i.e. btrfs, zfs) log many errors about "getinode size"
  • #1433506: [Geo-rep] Master and slave mounts are not accessible to take client profile info
  • #1433571: Undo pending xattrs only on the up bricks
  • #1433578: glusterd crashes when peering an IP where the address is more than acceptable range (>255) OR with random hostnames
  • #1433815: auth failure after upgrade to GlusterFS 3.10
  • #1433838: Move spit-brain msg in read txn to debug
  • #1434018: [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
  • #1434062: synclocks don't work correctly under contention
  • #1434274: BZ for some bugs found while going through synctask code
  • #1435943: When parallel readdir is enabled and there are simultaneous readdir and disconnects, then it results in crash
  • #1436086: Parallel readdir on Gluster NFS displays less number of dentries
  • #1436090: When parallel readdir is enabled, linked to file resolution fails
  • #1436739: Sharding: Fix a performance bug
  • #1436936: parameter state->size is wrong in server3_3_writev
  • #1437037: Standardize atomic increment/decrement calling conventions
  • #1437494: Brick Multiplexing:Volume status still shows the PID even after killing the process
  • #1437748: Spacing issue in fix-layout status output
  • #1437780: don't send lookup in fuse_getattr()
  • #1437853: Spellcheck issues reported during Debian build
  • #1438255: Don't wind post-op on a brick where the fop phase failed.
  • #1438370: rebalance: Allow admin to change thread count for rebalance
  • #1438411: [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
  • #1438738: Inode ref leak on anonymous reads and writes
  • #1438772: build: clang/llvm has builtin_ffs() and builtin_popcount()
  • #1438810: File-level WORM allows ftruncate() on read-only files
  • #1438858: explicitly specify executor to be bash for tests
  • #1439527: [disperse] Don't count healing brick as healthy brick
  • #1439571: dht/rebalance: Improve rebalance crawl performance
  • #1439640: [Parallel Readdir] : No bound-checks/CLI validation for parallel readdir tunables
  • #1440051: Application VMs with their disk images on sharded-replica 3 volume are unable to boot after performing rebalance
  • #1441035: remove bug-1421590-brick-mux-reuse-ports.t
  • #1441106: [Geo-rep]: Unnecessary unlink call while processing rmdir
  • #1441491: The data-self-heal option is not honored in EC
  • #1441508: dht/cluster: rebalance/remove-brick should honor min-free-disk
  • #1441910: gluster volume stop hangs
  • #1441945: [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
  • #1442145: split-brain-favorite-child-policy.t depends on "bc"
  • #1442411: meta xlator leaks memory when unloaded
  • #1442569: Implement Negative lookup cache feature to improve create performance
  • #1442724: rm -rf returns ENOTEMPTY even though ls on the mount point returns no files
  • #1442760: snapshot: snapshots appear to be failing with respect to secure geo-rep slave
  • #1443373: mkdir/rmdir loop causes gfid-mismatch on a 6 brick distribute volume
  • #1443896: [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
  • #1443959: packaging: no firewalld-filesystem before el 7.3
  • #1443977: Unable to take snapshot on a geo-replicated volume, even after stopping the session
  • #1444023: io-stats xlator leaks memory when fini() is called
  • #1444228: Autoconf leaves unexpanded variables in path names of non-shell-script text files
  • #1444941: bogus date in %changelog
  • #1445569: Provide a correct way to save the statedump generated by gfapi application
  • #1445590: Incorrect and redundant logs in the DHT rmdir code path
  • #1446126: S30samba-start.sh throws 'unary operator expected' warning during independent execution
  • #1446273: Some functions are exported incorrectly for Mac OS X with the GFAPI_PUBLIC macro
  • #1447543: Revert experimental and 4.0 features to prepare for 3.11 release
  • #1447571: RFE: Enhance handleops readdirplus operation to return handles along with dirents
  • #1447597: RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
  • #1447604: volume set fails if nfs.so is not installed
  • #1447607: Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
  • #1448345: Segmentation fault when creating a qcow2 with qemu-img
  • #1448416: Halo Replication feature for AFR translator
  • #1449004: [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
  • #1449191: Multiple bricks WILL crash after TCP port probing
  • #1449311: [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest
  • #1449775: quota: limit-usage command failed with error " Failed to start aux mount"
  • #1449921: afr: include quorum type and count when dumping afr priv
  • #1449924: When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st_size value.
  • #1449933: Brick Multiplexing :- resetting a brick bring down other bricks with same PID
  • #1450267: nl-cache xlator leaks timer wheel and other memory
  • #1450377: GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
  • #1450565: glfsheal: crashed(segfault) with disperse volume in RDMA
  • #1450729: Brick Multiplexing: seeing Input/Output Error for .trashcan
  • #1450933: [New] - Replacing an arbiter brick while I/O happens causes vm pause
  • #1451033: contrib: timer-wheel 32-bit bug, use builtin_fls, license, etc
  • #1451573: AFR returns the node uuid of the same node for every file in the replica
  • #1451586: crash in dht_rmdir_do
  • #1451591: cli xml status of detach tier broken
  • #1451887: Add tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t to bad tests
  • #1452000: Spacing issue in fix-layout status output
  • #1453050: [DHt] : segfault in dht_selfheal_dir_setattr while running regressions
  • #1453086: Brick Multiplexing: On reboot of a node Brick multiplexing feature lost on that node as multiple brick processes get spawned
  • #1453152: [Parallel Readdir] : Mounts fail when performance.parallel-readdir is set to "off"
  • #1454533: lock_revocation.t Marked as bad in 3.11 for CentOS as well
  • #1454569: [geo-rep + nl]: Multiple crashes observed on slave with "nlc_lookup_cbk"
  • #1454597: [Tiering]: High and low watermark values when set to the same level, is allowed
  • #1454612: glusterd on a node crashed after running volume profile command
  • #1454686: Implement FALLOCATE FOP for EC
  • #1454853: Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
  • #1455177: ignore incorrect uuid validation in gd_validate_mgmt_hndsk_req
  • #1455423: dht: dht self heal fails with no hashed subvol error
  • #1455907: heal info shows the status of the bricks as "Transport endpoint is not connected" though bricks are up
  • #1456224: [gluster-block]:Need a volume group profile option for gluster-block volume to add necessary options to be added.
  • #1456225: gluster-block is not working as expected when shard is enabled
  • #1456331: [Bitrot]: Brick process crash observed while trying to recover a bad file in disperse volume