Release Notes for GlusterFS 3.5.1

This is mostly a bugfix release. The Release Notes for 3.5.0 contain a listing of all the new features that were added.

Two notable changes are not just bug fixes or documentation additions:

  1. a new volume option, server.manage-gids, has been added. This option should be used when users of a volume are in more than approximately 93 groups (Bug 1096425); see the example after this list.
  2. the Duplicate Request Cache (DRC) for NFS is now disabled by default. This may reduce performance for certain workloads, but improves the overall stability and memory footprint for most users.
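
As a sketch, both options can be toggled from the gluster CLI, assuming a volume named <volname>: the first line enables server.manage-gids (which additionally requires a volume restart; see Known Issues below), and the second turns nfs.drc back on for workloads where the cache helps.

    gluster volume set <volname> server.manage-gids on
    gluster volume set <volname> nfs.drc on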

Bugs Fixed:

  • 765202: lgetxattr called with invalid keys on the bricks
  • 833586: inodelk hang from marker_rename_release_newp_lock
  • 859581: self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
  • 986429: Backupvolfile server option should work internal to GlusterFS framework
  • 1039544: [FEAT] "gluster volume heal info" should list the entries that actually need to be healed.
  • 1046624: Unable to heal symbolic Links
  • 1046853: AFR : For every file self-heal there are warning messages reported in glustershd.log file
  • 1063190: Volume was not accessible after server side quorum was met
  • 1064096: The old Python Translator code (not Glupy) should be removed
  • 1066996: Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain
  • 1071191: [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with open(), seek(), write()
  • 1078061: Need ability to heal mismatching user extended attributes without any changelogs
  • 1078365: New xlators are linked as versioned .so files, creating .so.0.0.0
  • 1086743: Add documentation for the Feature: RDMA-connection manager (RDMA-CM)
  • 1086748: Add documentation for the Feature: AFR CLI enhancements
  • 1086749: Add documentation for the Feature: Exposing Volume Capabilities
  • 1086750: Add documentation for the Feature: File Snapshots in GlusterFS
  • 1086751: Add documentation for the Feature: gfid-access
  • 1086752: Add documentation for the Feature: On-Wire Compression/Decompression
  • 1086754: Add documentation for the Feature: Quota Scalability
  • 1086755: Add documentation for the Feature: readdir-ahead
  • 1086756: Add documentation for the Feature: zerofill API for GlusterFS
  • 1086758: Add documentation for the Feature: Changelog based parallel geo-replication
  • 1086760: Add documentation for the Feature: Write Once Read Many (WORM) volume
  • 1086762: Add documentation for the Feature: BD Xlator - Block Device translator
  • 1086766: Add documentation for the Feature: Libgfapi
  • 1086774: Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS
  • 1086781: Add documentation for the Feature: Eager locking
  • 1086782: Add documentation for the Feature: glusterfs and oVirt integration
  • 1086783: Add documentation for the Feature: qemu 1.3 - libgfapi integration
  • 1088848: Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
  • 1089054: gf-error-codes.h is missing from source tarball
  • 1089470: SMB: Crash on brick process during a kernel compile.
  • 1089934: list dir with more than N files results in Input/output error
  • 1091340: Doc: Add glfs_fini known issue to release notes 3.5
  • 1091392: glusterfs.spec.in: minor/nit changes to sync with Fedora spec
  • 1095256: Excessive logging from self-heal daemon, and bricks
  • 1095595: Stick to IANA standard while allocating brick ports
  • 1095775: Add support in libgfapi to fetch volume info from glusterd.
  • 1095971: Stopping/Starting a Gluster volume resets ownership
  • 1096040: AFR : self-heal-daemon not clearing the change-logs of all the sources after self-heal
  • 1096425: i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
  • 1099878: Need support for handle based Ops to fetch/modify extended attributes of a file
  • 1101647: gluster volume heal volname statistics heal-count not giving desired output.
  • 1102306: license: xlators/features/glupy dual license GPLv2 and LGPLv3+
  • 1103413: Failure in gf_log_init reopening stderr
  • 1104592: heal info may give Success instead of transport end point not connected when a brick is down.
  • 1104915: glusterfsd crashes while doing stress tests
  • 1104919: Fix memory leaks in gfid-access xlator.
  • 1104959: Dist-geo-rep : some of the files not accessible on slave after the geo-rep sync from master to slave.
  • 1105188: Two instances each of brick processes, glusterfs-nfs and quotad seen after glusterd restart
  • 1105524: Disable nfs.drc by default
  • 1107937: quota-anon-fd-nfs.t fails spuriously
  • 1109832: I/O fails for glusterfs 3.4 AFR clients accessing servers upgraded to glusterfs 3.5
  • 1110777: glusterfsd OOM - using all memory when quota is enabled

Known Issues:

  • The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary for the option to take effect:

      gluster volume stop <volname>
      gluster volume start <volname>
      
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:

      option rpc-auth-allow-insecure on
      
    4. restarting glusterd is necessary for the change to take effect:

      service glusterd restart
      

More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.

  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled.
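
    A minimal sketch of doing so from the CLI; the option name performance.open-behind is an assumption, as these notes do not spell it out:

      gluster volume set <volname> performance.open-behind off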

  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported by QEMU developers. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. Follow Bug 1091335 to be notified when a release containing a final fix becomes available.
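
    A minimal sketch of the workaround in C; the volume name "testvol" and server "server.example.com" are hypothetical placeholders:

      #include <stdio.h>
      #include <glusterfs/api/glfs.h>

      int main(void)
      {
          glfs_t *fs = glfs_new("testvol");
          if (!fs)
              return 1;

          glfs_set_volfile_server(fs, "tcp", "server.example.com", 24007);

          if (glfs_init(fs) != 0) {
              /* Workaround: do NOT call glfs_fini() here; calling it
               * before a successful glfs_init() can hang the client
               * (Bug 1091335). */
              fprintf(stderr, "glfs_init failed\n");
              return 1;
          }

          /* ... use the fs handle ... */

          glfs_fini(fs);  /* safe only after a successful glfs_init() */
          return 0;
      }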

  • After enabling server.manage-gids, the volume needs to be stopped and started again for the option to take effect in the brick processes:

    gluster volume stop <volname>
    gluster volume start <volname>