NATS - v2.8.0

Security

Changelog

Go Version

  • 1.17.9: Both release executables and Docker images are built with this Go release.

Added:

  • LeafNode:
    • Support for a min_version setting in the leafnodes{} configuration block that rejects servers with a lower version. Note that this works only for servers that are v2.8.0 and above; see the leafnodes sketch after this list (#3013)
  • Monitoring:
    • Server version in monitoring landing page (#2928)
    • Logging to /healthz endpoint when failure occurs. Thanks to @samuel-form3 for the contribution (#2976)
    • MQTT and Websocket blocks in the /varz endpoint (#2996)
  • JetStream:
    • Consumer check added to healthz endpoint (#2927)
    • Max stream bytes checks (#2970)
    • Ability to limit a consumer's MaxAckPending value (#2982)
    • Allow streams and consumers to migrate between clusters. This feature is considered "beta" (#3001, #3036, #3041, #3043, #3047)
    • New unique_tag option in the jetstream{} configuration block to prevent placing a stream in the same availability zone twice; see the jetstream sketch after this list (#3011)
    • Stream Alternates field in StreamInfo response. They provide a priority list of mirrors and the source in relation to where the request originated (#3023)
  • Deterministic subject token to partition mapping; see the mapping sketch after this list (#2890)
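
A minimal sketch of the new leafnodes min_version setting on an accepting (hub) server; the listen port and version value below are illustrative, not taken from the release notes:

    # Hub-side leafnodes configuration (sketch)
    leafnodes {
      port: 7422              # listen port for incoming leaf node connections (example value)
      # Reject leaf node servers reporting a version lower than 2.8.0.
      # As noted above, this is only enforced against remotes that are themselves v2.8.0+.
      min_version: "2.8.0"
    }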
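
A minimal sketch of unique_tag in the jetstream{} block; the server_tags value and store_dir path are illustrative assumptions:

    # Each server in the cluster advertises its availability zone as a tag (example value).
    server_tags: ["az:us-east-1a"]

    jetstream {
      store_dir: "/data/jetstream"
      # Avoid placing two replicas of the same stream on servers whose
      # tag starting with "az:" carries the same value.
      unique_tag: "az:"
    }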
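
A minimal sketch of deterministic subject-token partition mapping; the subject names and partition count are illustrative, and the exact mapping-function syntax should be checked against the NATS documentation:

    # Map orders.<id> onto one of 3 partitions, chosen deterministically
    # from a hash of the first wildcard token.
    mappings = {
      "orders.*": "orders.{{partition(3,1)}}.{{wildcard(1)}}"
    }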

Changed:

  • Gateway:
    • Duplicate server names are now detected across a super-cluster. Server names ought to be unique, which is critical when JetStream is used (#2923)
  • JetStream:
    • Processing of consumer acknowledgments is now done outside of the socket read go routine. This helps reduce occurrences of "Readloop" warnings for route connections when there are many pending acknowledgments to process per consumer. Redeliveries may also be postponed (in chunks of 100ms) to favor processing of acknowledgments (#2898)
    • Lowered the default consumer "Maximum Ack Pending" from 20,000 to 1,000. This affects only applications/NATS CLI that do not set an explicit value. If you notice a performance degradation and your system can comfortably handle a higher value, explicitly configure the parameter to a higher value when creating the consumer/subscription (#2972)
  • Duplicate user names in authorization{} and accounts{} blocks are now detected and will cause the server to fail to start; see the configuration sketch below. Thanks to @smlx for the report (#2943)
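
As an illustration of the kind of configuration that is now rejected at startup, a sketch with hypothetical user and account names; previously such a server would start and could assign the connection to the global account (see the related fix further down):

    # This server will now fail to start: user "app" is defined both in
    # authorization{} and inside an account in accounts{}.
    authorization {
      users = [
        { user: "app", password: "s3cret" }
      ]
    }
    accounts {
      APP {
        users = [
          { user: "app", password: "s3cret" }
        ]
      }
    }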

Improved:

  • Configuration:
    • Skip exact duplicate URLs for routes, gateways, or leaf nodes. Duplicate route URLs could cause issues for JetStream since they may prevent electing a leader (#2930)
  • Logging:
    • Rate limiting of some identical warnings (#2994)
  • JetStream:
    • Behavior of "list" and "delete" operations for offline streams has been improved (responses no longer hanging or failing) (#2925)
    • When a consumer had a backoff duration list, the server could check for redeliveries more frequently than it should have. The redelivery timing was still honored though (#2948)
    • Ensures the cluster information in /jsz monitoring endpoint is sent from the leader only (#2932, #2983, #2984)
    • Memory pooling and management (#2960)
    • Consumer snapshot logic and disk usage in clustered mode. Thanks to @phho for the report (#2973)
    • Performance of ordered consumers (and stream catchup) with longer RTTs (#2975)
    • Performance for streams containing multiple subjects and consumers with a filter. Thanks to @samuel-form3 for the report (#3008)
    • Reduction of unnecessary leader elections (#3035)
    • On recovery, the server will now print the filenames for which bad checksums were detected (#3053)

Fixed:

  • JetStream:
    • Consumer state handling; for instance, a consumer with the "DeliverNew" deliver policy could receive old messages after a server restart in some cases (#2927)
    • Removal of an external source stream was not working properly. Thanks to @kylebernhardy for the report (#2938)
    • Possible panic on leadership change notices (#2939)
    • Possible deadlock during consumer leadership change. Thanks to @wchajl for the report (#2951)
    • Scaling up a stream was not replicating existing messages (#2958)
    • Heavy contention in file store could result in underflow and panic. Thanks to @whynowy for the report (#2959)
    • Consumer sample frequency not updated during a consumer update. Thanks to @boris-ilijic for the report and contribution (#2966)
    • Some limit issues on update (#2945)
    • Memory-based replicated consumers could stop working after snapshots and a server restart. The $SYS folder could also be seen growing in size. Thanks to @phho and @MilkyWay-core for the reports (#2973)
    • Possible panic (due to data races) when processing pull consumer requests. Thanks to @phho for the report (#2977)
    • Account stream imports were not removed on configuration reload. Thanks to @alirezaDavid for the report (#2978)
    • Sealed streams would not recover on server restart (#2991)
    • Possible panic on server shutdown trying to migrate ephemeral consumers (#2999)
    • A "next message" request for a pull consumer could fail when going over a gateway and then down to a leafnode (#3016)
    • Consumer deliver subject incorrect when imported and crossing route or gateway (#3017, #3025)
    • RAFT layer for stability and leader election (#3020)
    • Memory stream potentially delivering duplicates during a node restart. Thanks to @aksdb for the report (#3020)
    • A stream could become leader when it should not, causing messages to be lost (#3029)
    • A stream catchup could stall because the server sending data could fail during the process but still indicate to the other server that the catchup did complete (#3029, #3040)
    • A route could be blocked when processing an API call while an asset was recovering a big file (#3035)
    • The state of assets (streams or consumers) could be removed if they had been recreated after being initially removed (#3039)
    • When running in mixed mode, a JetStream export could be removed on JWT update (#3044)
    • Possible panic on cluster step-down of a consumer (#3045)
    • Some limit enforcement issues; also prevent loops in cyclic source stream configurations (#3046)
    • Some stream source issues, including missing messages and possible stall (#3052)
    • On configuration reload, JetStream would be disabled if it was enabled only from the command line options. Thanks to @xieyuschen for the contribution (#3050)
  • Leafnode:
    • Interest propagation issue when crossing accounts and the leaf connection is recreated. This could also manifest with JetStream since internally there are subscriptions that do cross accounts. Thanks to @LLLLimbo and @JH7 for the report (#3031)
  • Monitoring:
    • reserved_memory and/or reserved_storage in the jetstream{} block of the /varz endpoint could show an incorrectly large number due to a uint64 underflow (#2907)
    • verify_and_map in the tls{} block would prevent inspecting the monitoring page when using the secure https port. Thanks to @rsoberano-ld for the report (#2981)
    • Possible deadlock when inspecting the /jsz endpoint (#3029)
  • Miscellaneous:
    • A client connection could occasionally be incorrectly assigned to the global account. This happened when the configuration incorrectly referenced the same user in authorization{} and accounts{}. Thanks to @smlx for the report (#2943)
    • The NATS account resolver, while synchronizing all JWTs, would not validate the nkey(s) or jwt(s) received via the system account (CVE-2022-28357) (#2985)
    • Reject messages from applications that have an invalid reply subject (one containing the $JS.ACK prefix) (#3026)
    • Allow server to run as system user on Windows. Thanks to @LaurensVergote for the contribution (#3022)

Complete Changes

https://github.com/nats-io/nats-server/compare/v2.7.4...v2.8.0


Details

  • Date: April 18, 2022, 10:35 p.m.
  • Name: Release v2.8.0
  • Type: Minor