# Release notes for Gluster 3.11.2
## Major changes, features and limitations addressed in this release

There are no major features or changes made in this release.
## Major issues

1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images. If such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.
    - The last known cause for corruption (Bug #1465123) has a fix in this release. As further testing is still in progress, the issue is retained as a major issue.
    - The status of this bug can be tracked here: #1465123
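For reference, the expand-and-rebalance sequence associated with the corruption reports is the standard brick-addition workflow, sketched below. The volume name `myvol` and the brick path are placeholders, not values from this release:

```shell
# Placeholder volume/brick names, for illustration only.
# This is the add-brick + rebalance sequence that corruption
# reports on sharded volumes relate to (Bug #1465123).

# Expand the sharded volume by adding a brick:
gluster volume add-brick myvol server3:/bricks/brick1

# Redistribute data across the new brick set:
gluster volume rebalance myvol start

# Monitor migration progress:
gluster volume rebalance myvol status
```

Until testing of the fix concludes, operators of sharded volumes may want to defer this sequence.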
## Bugs addressed

Bugs addressed since release-3.11.0 are listed below.
- #1463512: USS: stale snap entries are seen when activation/deactivation performed during one of the glusterd's unavailability
- #1463513: [geo-rep]: extended attributes are not synced if the entry and extended attributes are done within changelog rollover/or entry sync
- #1463517: Brick Multiplexing:dmesg shows request_sock_TCP: Possible SYN flooding on port 49152 and memory related backtraces
- #1463528: [Perf] 35% drop in small file creates on smbv3 on *2
- #1463626: [Ganesha]: Bricks crashed while running posix compliance test suite on V4 mount
- #1464316: DHT: Pass errno as an argument to gf_msg
- #1465123: Fd based fops fail with EBADF on file migration
- #1465854: Regression: Heal info takes longer time when a brick is down
- #1466801: assorted typos and spelling mistakes from Debian lintian
- #1466859: dht_rename_lock_cbk crashes in upstream regression test
- #1467268: Heal info shows incorrect status
- #1468118: disperse seek does not correctly handle the end of file
- #1468200: [Geo-rep]: entry failed to sync to slave with ENOENT error
- #1468457: self-heal daemon CPU consumption not reducing when IOs are going on and all redundant bricks are brought down one after another
- #1469459: Rebalance hangs on remove-brick if the target volume changes
- #1470938: Regression: non-disruptive(in-service) upgrade on EC volume fails
- #1471025: glusterfs process leaking memory when error occurs
- #1471611: metadata heal not happening despite having an active sink
- #1471869: cthon04 can cause segfault in gNFS/NLM
- #1472794: Test script failing with brick multiplexing enabled