[Vendor] Update directly used dependencies (#15593)

* github.com/blevesearch/bleve v2.0.2 -> v2.0.3

* github.com/denisenkom/go-mssqldb v0.9.0 -> v0.10.0

* github.com/editorconfig/editorconfig-core-go v2.4.1 -> v2.4.2

* github.com/go-chi/cors v1.1.1 -> v1.2.0

* github.com/go-git/go-billy v5.0.0 -> v5.1.0

* github.com/go-git/go-git v5.2.0 -> v5.3.0

* github.com/go-ldap/ldap v3.2.4 -> v3.3.0

* github.com/go-redis/redis v8.6.0 -> v8.8.2

* github.com/go-sql-driver/mysql v1.5.0 -> v1.6.0

* github.com/go-swagger/go-swagger v0.26.1 -> v0.27.0

* github.com/lib/pq v1.9.0 -> v1.10.1

* github.com/mattn/go-sqlite3 v1.14.6 -> v1.14.7

* github.com/go-testfixtures/testfixtures v3.5.0 -> v3.6.0

* github.com/issue9/identicon v1.0.1 -> v1.2.0

* github.com/klauspost/compress v1.11.8 -> v1.12.1

* github.com/mgechev/revive v1.0.3 -> v1.0.6

* github.com/microcosm-cc/bluemonday v1.0.7 -> v1.0.8

* github.com/niklasfasching/go-org v1.4.0 -> v1.5.0

* github.com/olivere/elastic v7.0.22 -> v7.0.24

* github.com/pelletier/go-toml v1.8.1 -> v1.9.0

* github.com/prometheus/client_golang v1.9.0 -> v1.10.0

* github.com/xanzy/go-gitlab v0.44.0 -> v0.48.0

* github.com/yuin/goldmark v1.3.3 -> v1.3.5

* github.com/6543/go-version v1.2.4 -> v1.3.1

* github.com/lib/pq v1.10.0 -> v1.10.1 (again)
Authored by 6543 on 2021-04-23 02:08:53 +02:00, committed by GitHub
parent 834fc74873
commit 792b4dba2c
558 changed files with 32080 additions and 24669 deletions


@ -22,3 +22,5 @@ linters:
- exhaustivestruct
- wrapcheck
- errorlint
- cyclop
- forcetypeassert


@ -1,20 +0,0 @@
dist: xenial
language: go
services:
- redis-server
go:
- 1.14.x
- 1.15.x
- tip
matrix:
allow_failures:
- go: tip
go_import_path: github.com/go-redis/redis
before_install:
- curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s --
-b $(go env GOPATH)/bin v1.32.2


@ -1,5 +1,120 @@
# Changelog
> :heart: [**Uptrace.dev** - distributed traces, logs, and errors in one place](https://uptrace.dev)
> :heart:
> [**Uptrace.dev** - All-in-one tool to optimize performance and monitor errors & logs](https://uptrace.dev)
See https://redis.uptrace.dev/changelog/
## v8.8
- To make updating easier, extra modules now have the same version as go-redis does. That means that
you need to update your imports:
```
github.com/go-redis/redis/extra/redisotel -> github.com/go-redis/redis/extra/redisotel/v8
github.com/go-redis/redis/extra/rediscensus -> github.com/go-redis/redis/extra/rediscensus/v8
```
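In practice only the import paths change, for example (a sketch; the blank imports are there only
to keep the snippet compilable):
```go
import (
	// Extra modules now carry the same major version as go-redis itself.
	_ "github.com/go-redis/redis/extra/rediscensus/v8"
	_ "github.com/go-redis/redis/extra/redisotel/v8"
)
```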
## v8.5
- [knadh](https://github.com/knadh) contributed the long-awaited ability to scan a Redis Hash into a
  struct (a slightly fuller sketch follows this list):
```go
err := rdb.HGetAll(ctx, "hash").Scan(&data)
err := rdb.MGet(ctx, "key1", "key2").Scan(&data)
```
- Please check [redismock](https://github.com/go-redis/redismock) by
  [monkey92t](https://github.com/monkey92t) if you are looking to mock the Redis client.
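A slightly fuller sketch of the hash scanning (assuming an `rdb` client and `ctx` as in the example
above; the `redis` struct tags used to match fields are an assumption about the scanning API):
```go
type Model struct {
	Str string `redis:"str"`
	Int int    `redis:"int"`
}
var data Model
// HGetAll fetches the whole hash; Scan fills the struct fields by their redis tags.
if err := rdb.HGetAll(ctx, "hash").Scan(&data); err != nil {
	panic(err)
}
```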
## v8
- All commands require `context.Context` as the first argument, e.g. `rdb.Ping(ctx)`. If you are not
  using `context.Context` yet, the simplest option is to define a global package variable
  `var ctx = context.TODO()` and use it wherever `ctx` is required (see the sketch after this list).
- Full support for `context.Context` canceling.
- Added `redis.NewFailoverClusterClient` that supports routing read-only commands to a slave node.
- Added `redisext.OpenTelemetryHook` that adds
[Redis OpenTelemetry instrumentation](https://redis.uptrace.dev/tracing/).
- Redis slow log support.
- Ring uses Rendezvous Hashing by default, which provides better distribution. You need to move
  existing keys to a new location or they will be inaccessible/lost. To use the old hashing scheme:
```go
import "github.com/golang/groupcache/consistenthash"
ring := redis.NewRing(&redis.RingOptions{
NewConsistentHash: func() {
return consistenthash.New(100, crc32.ChecksumIEEE)
},
})
```
- `ClusterOptions.MaxRedirects` default value is changed from 8 to 3.
- `Options.MaxRetries` default value is changed from 0 to 3.
- `Cluster.ForEachNode` is renamed to `ForEachShard` for consistency with `Ring`.
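As a rough illustration of the `context.Context` change above (a sketch, assuming an already
constructed `rdb` client and the `context` import):
```go
// Simplest migration path: one package-level context used everywhere.
var ctx = context.TODO()

func ping() error {
	// Every command now takes ctx as its first argument.
	return rdb.Ping(ctx).Err()
}
```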
## v7.3
- New option `Options.Username`, which causes the client to use `AuthACL`. Be aware of this if your
  connection URL contains a username.
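A minimal sketch of a client constructed with the new option (the address and credentials are
placeholders):
```go
rdb := redis.NewClient(&redis.Options{
	Addr:     "localhost:6379",
	Username: "app-user", // triggers ACL-style AUTH <username> <password>
	Password: "secret",
})
```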
## v7.2
- Existing `HMSet` is renamed to `HSet` and old deprecated `HMSet` is restored for Redis 3 users.
## v7.1
- Existing `Cmd.String` is renamed to `Cmd.Text`. New `Cmd.String` implements `fmt.Stringer`
interface.
## v7
- _Important_. Tx.Pipeline now returns a non-transactional pipeline. Use Tx.TxPipeline for a
transactional pipeline.
- WrapProcess is replaced with more convenient AddHook that has access to context.Context.
- WithContext can no longer be used to create a shallow copy of the client.
- New methods ProcessContext, DoContext, and ExecContext.
- Client respects Context.Deadline when setting net.Conn deadline.
- Client listens on Context.Done while waiting for a connection from the pool and returns an error
  when the context is cancelled.
- Add PubSub.ChannelWithSubscriptions that sends `*Subscription` in addition to `*Message` to allow
detecting reconnections.
- `time.Time` is now marshalled in RFC3339 format. The `rdb.Get("foo").Time()` helper is added to
  parse the time (see the sketch after this list).
- `SetLimiter` is removed and `Options.Limiter` is added instead.
- `HMSet` is deprecated as of Redis v4.
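For illustration, the time helper mentioned above can be used like this (a sketch in v8 style,
assuming an `rdb` client and `ctx` as in the examples above):
```go
// time.Time values are marshalled in RFC3339 format on write,
// and can be parsed back with the Time() helper.
err := rdb.Set(ctx, "foo", time.Now(), 0).Err()
t, err := rdb.Get(ctx, "foo").Time()
```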
## v6.15
- Cluster and Ring pipelines process commands for each node in its own goroutine.
## 6.14
- Added Options.MinIdleConns.
- Added Options.MaxConnAge.
- PoolStats.FreeConns is renamed to PoolStats.IdleConns.
- Add Client.Do to simplify creating custom commands.
- Add Cmd.String, Cmd.Int, Cmd.Int64, Cmd.Uint64, Cmd.Float64, and Cmd.Bool helpers.
- Lower memory usage.
## v6.13
- Ring got new options called `HashReplicas` and `Hash`. It is recommended to set
  `HashReplicas = 1000` for better key distribution between shards.
- Cluster client was optimized to use much less memory when reloading cluster state.
- PubSub.ReceiveMessage is reworked to not use ReceiveTimeout so it does not lose data when a
  timeout occurs. In most cases it is recommended to use PubSub.Channel instead.
- Dialer.KeepAlive is set to 5 minutes by default.
## v6.12
- ClusterClient got a new option called `ClusterSlots`, which allows building a cluster of normal
  Redis servers that don't have cluster mode enabled. See
  https://godoc.org/github.com/go-redis/redis#example-NewClusterClient--ManualSetup


@ -1,10 +1,9 @@
all: testdeps
test: testdeps
go test ./...
go test ./... -short -race
go test ./... -run=NONE -bench=. -benchmem
env GOOS=linux GOARCH=386 go test ./...
go vet
golangci-lint run
testdeps: testdata/redis/src/redis-server
@ -15,7 +14,7 @@ bench: testdeps
testdata/redis:
mkdir -p $@
wget -qO- https://download.redis.io/releases/redis-6.2-rc3.tar.gz | tar xvz --strip-components=1 -C $@
wget -qO- https://download.redis.io/releases/redis-6.2.1.tar.gz | tar xvz --strip-components=1 -C $@
testdata/redis/src/redis-server: testdata/redis
cd $< && make all


@ -1,12 +1,16 @@
<p align="center">
<a href="https://uptrace.dev/?utm_source=gh-redis&utm_campaign=gh-redis-banner1">
<img src="https://raw.githubusercontent.com/uptrace/roadmap/master/banner1.png" alt="All-in-one tool to optimize performance and monitor errors & logs">
</a>
</p>
# Redis client for Golang
[![Build Status](https://travis-ci.org/go-redis/redis.png?branch=master)](https://travis-ci.org/go-redis/redis)
![build workflow](https://github.com/go-redis/redis/actions/workflows/build.yml/badge.svg)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/go-redis/redis/v8)](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc)
[![Documentation](https://img.shields.io/badge/redis-documentation-informational)](https://redis.uptrace.dev/)
[![Chat](https://discordapp.com/api/guilds/752070105847955518/widget.png)](https://discord.gg/rWtp5Aj)
> :heart: [**Uptrace.dev** - distributed traces, logs, and errors in one place](https://uptrace.dev)
- Join [Discord](https://discord.gg/rWtp5Aj) to ask questions.
- [Documentation](https://redis.uptrace.dev)
- [Reference](https://pkg.go.dev/github.com/go-redis/redis/v8?tab=doc)
@ -129,10 +133,13 @@ vals, err := rdb.Eval(ctx, "return {KEYS[1],ARGV[1]}", []string{"key"}, "hello")
// custom command
res, err := rdb.Do(ctx, "set", "key", "value").Result()
```
## Run the test
go-redis will start a redis-server and run the test cases.
The paths of the redis-server binary and the redis config file are defined in `main_test.go`:
```
var (
redisServerBin, _ = filepath.Abs(filepath.Join("testdata", "redis", "src", "redis-server"))
@ -140,13 +147,16 @@ var (
)
```
For local testing, you can change the variables to refer to your local files, or create a soft link
to the corresponding folder for redis-server and copy the config file to `testdata/redis/`:
```
ln -s /usr/bin/redis-server ./go-redis/testdata/redis/src
cp ./go-redis/testdata/redis.conf ./go-redis/testdata/redis/
```
Lastly, run:
```
go test
```


@ -295,8 +295,9 @@ func (c *clusterNodes) Close() error {
func (c *clusterNodes) Addrs() ([]string, error) {
var addrs []string
c.mu.RLock()
closed := c.closed
closed := c.closed //nolint:ifshort
if !closed {
if len(c.activeAddrs) > 0 {
addrs = c.activeAddrs
@ -632,14 +633,14 @@ func (c *clusterStateHolder) Reload(ctx context.Context) (*clusterState, error)
return state, nil
}
func (c *clusterStateHolder) LazyReload(ctx context.Context) {
func (c *clusterStateHolder) LazyReload() {
if !atomic.CompareAndSwapUint32(&c.reloading, 0, 1) {
return
}
go func() {
defer atomic.StoreUint32(&c.reloading, 0)
_, err := c.Reload(ctx)
_, err := c.Reload(context.Background())
if err != nil {
return
}
@ -649,14 +650,15 @@ func (c *clusterStateHolder) LazyReload(ctx context.Context) {
func (c *clusterStateHolder) Get(ctx context.Context) (*clusterState, error) {
v := c.state.Load()
if v != nil {
state := v.(*clusterState)
if time.Since(state.createdAt) > 10*time.Second {
c.LazyReload(ctx)
}
return state, nil
if v == nil {
return c.Reload(ctx)
}
return c.Reload(ctx)
state := v.(*clusterState)
if time.Since(state.createdAt) > 10*time.Second {
c.LazyReload()
}
return state, nil
}
func (c *clusterStateHolder) ReloadOrGet(ctx context.Context) (*clusterState, error) {
@ -732,7 +734,7 @@ func (c *ClusterClient) Options() *ClusterOptions {
// ReloadState reloads cluster state. If available it calls ClusterSlots func
// to get cluster slots information.
func (c *ClusterClient) ReloadState(ctx context.Context) {
c.state.LazyReload(ctx)
c.state.LazyReload()
}
// Close closes the cluster client, releasing any open resources.
@ -793,7 +795,7 @@ func (c *ClusterClient) process(ctx context.Context, cmd Cmder) error {
}
if isReadOnly := isReadOnlyError(lastErr); isReadOnly || lastErr == pool.ErrClosed {
if isReadOnly {
c.state.LazyReload(ctx)
c.state.LazyReload()
}
node = nil
continue
@ -1228,7 +1230,7 @@ func (c *ClusterClient) checkMovedErr(
}
if moved {
c.state.LazyReload(ctx)
c.state.LazyReload()
failedCmds.Add(node, cmd)
return true
}
@ -1414,7 +1416,7 @@ func (c *ClusterClient) cmdsMoved(
}
if moved {
c.state.LazyReload(ctx)
c.state.LazyReload()
for _, cmd := range cmds {
failedCmds.Add(node, cmd)
}
@ -1472,7 +1474,7 @@ func (c *ClusterClient) Watch(ctx context.Context, fn func(*Tx) error, keys ...s
if isReadOnly := isReadOnlyError(err); isReadOnly || err == pool.ErrClosed {
if isReadOnly {
c.state.LazyReload(ctx)
c.state.LazyReload()
}
node, err = c.slotMasterNode(ctx, slot)
if err != nil {


@ -710,6 +710,13 @@ func (cmd *StringCmd) Bytes() ([]byte, error) {
return util.StringToBytes(cmd.val), cmd.err
}
func (cmd *StringCmd) Bool() (bool, error) {
if cmd.err != nil {
return false, cmd.err
}
return strconv.ParseBool(cmd.val)
}
func (cmd *StringCmd) Int() (int, error) {
if cmd.err != nil {
return 0, cmd.err
@ -810,6 +817,55 @@ func (cmd *FloatCmd) readReply(rd *proto.Reader) (err error) {
//------------------------------------------------------------------------------
type FloatSliceCmd struct {
baseCmd
val []float64
}
var _ Cmder = (*FloatSliceCmd)(nil)
func NewFloatSliceCmd(ctx context.Context, args ...interface{}) *FloatSliceCmd {
return &FloatSliceCmd{
baseCmd: baseCmd{
ctx: ctx,
args: args,
},
}
}
func (cmd *FloatSliceCmd) Val() []float64 {
return cmd.val
}
func (cmd *FloatSliceCmd) Result() ([]float64, error) {
return cmd.val, cmd.err
}
func (cmd *FloatSliceCmd) String() string {
return cmdString(cmd, cmd.val)
}
func (cmd *FloatSliceCmd) readReply(rd *proto.Reader) error {
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]float64, n)
for i := 0; i < len(cmd.val); i++ {
switch num, err := rd.ReadFloatReply(); {
case err == Nil:
cmd.val[i] = 0
case err != nil:
return nil, err
default:
cmd.val[i] = num
}
}
return nil, nil
})
return err
}
//------------------------------------------------------------------------------
type StringSliceCmd struct {
baseCmd


@ -67,6 +67,11 @@ func appendArg(dst []interface{}, arg interface{}) []interface{} {
dst = append(dst, k, v)
}
return dst
case map[string]string:
for k, v := range arg {
dst = append(dst, k, v)
}
return dst
default:
return append(dst, arg)
}
@ -117,6 +122,8 @@ type Cmdable interface {
Get(ctx context.Context, key string) *StringCmd
GetRange(ctx context.Context, key string, start, end int64) *StringCmd
GetSet(ctx context.Context, key string, value interface{}) *StringCmd
GetEx(ctx context.Context, key string, expiration time.Duration) *StringCmd
GetDel(ctx context.Context, key string) *StringCmd
Incr(ctx context.Context, key string) *IntCmd
IncrBy(ctx context.Context, key string, value int64) *IntCmd
IncrByFloat(ctx context.Context, key string, value float64) *FloatCmd
@ -124,6 +131,8 @@ type Cmdable interface {
MSet(ctx context.Context, values ...interface{}) *StatusCmd
MSetNX(ctx context.Context, values ...interface{}) *BoolCmd
Set(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd
SetArgs(ctx context.Context, key string, value interface{}, a SetArgs) *StatusCmd
// TODO: rename to SetEx
SetEX(ctx context.Context, key string, value interface{}, expiration time.Duration) *StatusCmd
SetNX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd
SetXX(ctx context.Context, key string, value interface{}, expiration time.Duration) *BoolCmd
@ -159,6 +168,7 @@ type Cmdable interface {
HMSet(ctx context.Context, key string, values ...interface{}) *BoolCmd
HSetNX(ctx context.Context, key, field string, value interface{}) *BoolCmd
HVals(ctx context.Context, key string) *StringSliceCmd
HRandField(ctx context.Context, key string, count int, withValues bool) *StringSliceCmd
BLPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd
BRPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd
@ -181,6 +191,7 @@ type Cmdable interface {
RPopLPush(ctx context.Context, source, destination string) *StringCmd
RPush(ctx context.Context, key string, values ...interface{}) *IntCmd
RPushX(ctx context.Context, key string, values ...interface{}) *IntCmd
LMove(ctx context.Context, source, destination, srcpos, destpos string) *StringCmd
SAdd(ctx context.Context, key string, members ...interface{}) *IntCmd
SCard(ctx context.Context, key string) *IntCmd
@ -224,6 +235,7 @@ type Cmdable interface {
XTrimApprox(ctx context.Context, key string, maxLen int64) *IntCmd
XInfoGroups(ctx context.Context, key string) *XInfoGroupsCmd
XInfoStream(ctx context.Context, key string) *XInfoStreamCmd
XInfoConsumers(ctx context.Context, key string, group string) *XInfoConsumersCmd
BZPopMax(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd
BZPopMin(ctx context.Context, timeout time.Duration, keys ...string) *ZWithKeyCmd
@ -241,6 +253,7 @@ type Cmdable interface {
ZLexCount(ctx context.Context, key, min, max string) *IntCmd
ZIncrBy(ctx context.Context, key string, increment float64, member string) *FloatCmd
ZInterStore(ctx context.Context, destination string, store *ZStore) *IntCmd
ZMScore(ctx context.Context, key string, members ...string) *FloatSliceCmd
ZPopMax(ctx context.Context, key string, count ...int64) *ZSliceCmd
ZPopMin(ctx context.Context, key string, count ...int64) *ZSliceCmd
ZRange(ctx context.Context, key string, start, stop int64) *StringSliceCmd
@ -261,6 +274,7 @@ type Cmdable interface {
ZRevRank(ctx context.Context, key, member string) *IntCmd
ZScore(ctx context.Context, key, member string) *FloatCmd
ZUnionStore(ctx context.Context, dest string, store *ZStore) *IntCmd
ZRandMember(ctx context.Context, key string, count int, withScores bool) *StringSliceCmd
PFAdd(ctx context.Context, key string, els ...interface{}) *IntCmd
PFCount(ctx context.Context, keys ...string) *IntCmd
@ -708,6 +722,33 @@ func (c cmdable) GetSet(ctx context.Context, key string, value interface{}) *Str
return cmd
}
// An expiration of zero removes the TTL associated with the key (i.e. GETEX key persist).
// Requires Redis >= 6.2.0.
func (c cmdable) GetEx(ctx context.Context, key string, expiration time.Duration) *StringCmd {
args := make([]interface{}, 0, 4)
args = append(args, "getex", key)
if expiration > 0 {
if usePrecise(expiration) {
args = append(args, "px", formatMs(ctx, expiration))
} else {
args = append(args, "ex", formatSec(ctx, expiration))
}
} else if expiration == 0 {
args = append(args, "persist")
}
cmd := NewStringCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// redis-server version >= 6.2.0.
func (c cmdable) GetDel(ctx context.Context, key string) *StringCmd {
cmd := NewStringCmd(ctx, "getdel", key)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) Incr(ctx context.Context, key string) *IntCmd {
cmd := NewIntCmd(ctx, "incr", key)
_ = c(ctx, cmd)
@ -1180,6 +1221,21 @@ func (c cmdable) HVals(ctx context.Context, key string) *StringSliceCmd {
return cmd
}
// redis-server version >= 6.2.0.
func (c cmdable) HRandField(ctx context.Context, key string, count int, withValues bool) *StringSliceCmd {
args := make([]interface{}, 0, 4)
// Although count=0 is meaningless, redis accepts count=0.
args = append(args, "hrandfield", key, count)
if withValues {
args = append(args, "withvalues")
}
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c cmdable) BLPop(ctx context.Context, timeout time.Duration, keys ...string) *StringSliceCmd {
@ -1376,6 +1432,12 @@ func (c cmdable) RPushX(ctx context.Context, key string, values ...interface{})
return cmd
}
func (c cmdable) LMove(ctx context.Context, source, destination, srcpos, destpos string) *StringCmd {
cmd := NewStringCmd(ctx, "lmove", source, destination, srcpos, destpos)
_ = c(ctx, cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c cmdable) SAdd(ctx context.Context, key string, members ...interface{}) *IntCmd {
@ -1987,12 +2049,10 @@ func (c cmdable) ZIncrBy(ctx context.Context, key string, increment float64, mem
}
func (c cmdable) ZInterStore(ctx context.Context, destination string, store *ZStore) *IntCmd {
args := make([]interface{}, 3+len(store.Keys))
args[0] = "zinterstore"
args[1] = destination
args[2] = len(store.Keys)
for i, key := range store.Keys {
args[3+i] = key
args := make([]interface{}, 0, 3+len(store.Keys))
args = append(args, "zinterstore", destination, len(store.Keys))
for _, key := range store.Keys {
args = append(args, key)
}
if len(store.Weights) > 0 {
args = append(args, "weights")
@ -2009,6 +2069,18 @@ func (c cmdable) ZInterStore(ctx context.Context, destination string, store *ZSt
return cmd
}
func (c cmdable) ZMScore(ctx context.Context, key string, members ...string) *FloatSliceCmd {
args := make([]interface{}, 2+len(members))
args[0] = "zmscore"
args[1] = key
for i, member := range members {
args[2+i] = member
}
cmd := NewFloatSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ZPopMax(ctx context.Context, key string, count ...int64) *ZSliceCmd {
args := []interface{}{
"zpopmax",
@ -2223,12 +2295,10 @@ func (c cmdable) ZScore(ctx context.Context, key, member string) *FloatCmd {
}
func (c cmdable) ZUnionStore(ctx context.Context, dest string, store *ZStore) *IntCmd {
args := make([]interface{}, 3+len(store.Keys))
args[0] = "zunionstore"
args[1] = dest
args[2] = len(store.Keys)
for i, key := range store.Keys {
args[3+i] = key
args := make([]interface{}, 0, 3+len(store.Keys))
args = append(args, "zunionstore", dest, len(store.Keys))
for _, key := range store.Keys {
args = append(args, key)
}
if len(store.Weights) > 0 {
args = append(args, "weights")
@ -2246,6 +2316,21 @@ func (c cmdable) ZUnionStore(ctx context.Context, dest string, store *ZStore) *I
return cmd
}
// redis-server version >= 6.2.0.
func (c cmdable) ZRandMember(ctx context.Context, key string, count int, withScores bool) *StringSliceCmd {
args := make([]interface{}, 0, 4)
// Although count=0 is meaningless, redis accepts count=0.
args = append(args, "zrandmember", key, count)
if withScores {
args = append(args, "withscores")
}
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c cmdable) PFAdd(ctx context.Context, key string, els ...interface{}) *IntCmd {


@ -7,7 +7,7 @@ require (
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f
github.com/onsi/ginkgo v1.15.0
github.com/onsi/gomega v1.10.5
go.opentelemetry.io/otel v0.17.0
go.opentelemetry.io/otel/metric v0.17.0
go.opentelemetry.io/otel/trace v0.17.0
go.opentelemetry.io/otel v0.19.0
go.opentelemetry.io/otel/metric v0.19.0
go.opentelemetry.io/otel/trace v0.19.0
)


@ -18,8 +18,8 @@ github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
@ -37,14 +37,14 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/otel v0.17.0 h1:6MKOu8WY4hmfpQ4oQn34u6rYhnf2sWf1LXYO/UFm71U=
go.opentelemetry.io/otel v0.17.0/go.mod h1:Oqtdxmf7UtEvL037ohlgnaYa1h7GtMh0NcSd9eqkC9s=
go.opentelemetry.io/otel/metric v0.17.0 h1:t+5EioN8YFXQ2EH+1j6FHCKMUj+57zIDSnSGr/mWuug=
go.opentelemetry.io/otel/metric v0.17.0/go.mod h1:hUz9lH1rNXyEwWAhIWCMFWKhYtpASgSnObJFnU26dJ0=
go.opentelemetry.io/otel/oteltest v0.17.0 h1:TyAihUowTDLqb4+m5ePAsR71xPJaTBJl4KDArIdi9k4=
go.opentelemetry.io/otel/oteltest v0.17.0/go.mod h1:JT/LGFxPwpN+nlsTiinSYjdIx3hZIGqHCpChcIZmdoE=
go.opentelemetry.io/otel/trace v0.17.0 h1:SBOj64/GAOyWzs5F680yW1ITIfJkm6cJWL2YAvuL9xY=
go.opentelemetry.io/otel/trace v0.17.0/go.mod h1:bIujpqg6ZL6xUTubIUgziI1jSaUPthmabA/ygf/6Cfg=
go.opentelemetry.io/otel v0.19.0 h1:Lenfy7QHRXPZVsw/12CWpxX6d/JkrX8wrx2vO8G80Ng=
go.opentelemetry.io/otel v0.19.0/go.mod h1:j9bF567N9EfomkSidSfmMwIwIBuP37AMAIzVW85OxSg=
go.opentelemetry.io/otel/metric v0.19.0 h1:dtZ1Ju44gkJkYvo+3qGqVXmf88tc+a42edOywypengg=
go.opentelemetry.io/otel/metric v0.19.0/go.mod h1:8f9fglJPRnXuskQmKpnad31lcLJ2VmNNqIsx/uIwBSc=
go.opentelemetry.io/otel/oteltest v0.19.0 h1:YVfA0ByROYqTwOxqHVZYZExzEpfZor+MU1rU+ip2v9Q=
go.opentelemetry.io/otel/oteltest v0.19.0/go.mod h1:tI4yxwh8U21v7JD6R3BcA/2+RBoTKFexE/PJ/nSO7IA=
go.opentelemetry.io/otel/trace v0.19.0 h1:1ucYlenXIDA1OlHVLDZKX0ObXV5RLaq06DtUKz5e5zc=
go.opentelemetry.io/otel/trace v0.19.0/go.mod h1:4IXiNextNOpPnRlI4ryK69mn5iC84bjBWZQA5DXz/qg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=


@ -9,7 +9,6 @@ import (
"github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/proto"
"go.opentelemetry.io/otel/trace"
)
var noDeadline = time.Time{}
@ -66,41 +65,43 @@ func (cn *Conn) RemoteAddr() net.Addr {
}
func (cn *Conn) WithReader(ctx context.Context, timeout time.Duration, fn func(rd *proto.Reader) error) error {
return internal.WithSpan(ctx, "redis.with_reader", func(ctx context.Context, span trace.Span) error {
if err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)); err != nil {
return internal.RecordError(ctx, span, err)
}
if err := fn(cn.rd); err != nil {
return internal.RecordError(ctx, span, err)
}
return nil
})
ctx, span := internal.StartSpan(ctx, "redis.with_reader")
defer span.End()
if err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)); err != nil {
return internal.RecordError(ctx, span, err)
}
if err := fn(cn.rd); err != nil {
return internal.RecordError(ctx, span, err)
}
return nil
}
func (cn *Conn) WithWriter(
ctx context.Context, timeout time.Duration, fn func(wr *proto.Writer) error,
) error {
return internal.WithSpan(ctx, "redis.with_writer", func(ctx context.Context, span trace.Span) error {
if err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)); err != nil {
return internal.RecordError(ctx, span, err)
}
ctx, span := internal.StartSpan(ctx, "redis.with_writer")
defer span.End()
if cn.bw.Buffered() > 0 {
cn.bw.Reset(cn.netConn)
}
if err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)); err != nil {
return internal.RecordError(ctx, span, err)
}
if err := fn(cn.wr); err != nil {
return internal.RecordError(ctx, span, err)
}
if cn.bw.Buffered() > 0 {
cn.bw.Reset(cn.netConn)
}
if err := cn.bw.Flush(); err != nil {
return internal.RecordError(ctx, span, err)
}
if err := fn(cn.wr); err != nil {
return internal.RecordError(ctx, span, err)
}
internal.WritesCounter.Add(ctx, 1)
if err := cn.bw.Flush(); err != nil {
return internal.RecordError(ctx, span, err)
}
return nil
})
internal.WritesCounter.Add(ctx, 1)
return nil
}
func (cn *Conn) Close() error {


@ -228,8 +228,7 @@ func (p *ConnPool) Get(ctx context.Context) (*Conn, error) {
return nil, ErrClosed
}
err := p.waitTurn(ctx)
if err != nil {
if err := p.waitTurn(ctx); err != nil {
return nil, err
}


@ -172,8 +172,7 @@ func (p *StickyConnPool) Reset(ctx context.Context) error {
func (p *StickyConnPool) badConnError() error {
if v := p._badConnError.Load(); v != nil {
err := v.(BadConnError)
if err.wrapped != nil {
if err := v.(BadConnError); err.wrapped != nil {
return err
}
}


@ -83,7 +83,7 @@ func (r *Reader) readLine() ([]byte, error) {
return nil, err
}
full = append(full, b...)
full = append(full, b...) //nolint:makezero
b = full
}
if len(b) <= 2 || b[len(b)-1] != '\n' || b[len(b)-2] != '\r' {


@ -10,7 +10,6 @@ import (
)
// Scan parses bytes `b` to `v` with appropriate type.
// nolint: gocyclo
func Scan(b []byte, v interface{}) error {
switch v := v.(type) {
case nil:


@ -11,17 +11,18 @@ import (
)
func Sleep(ctx context.Context, dur time.Duration) error {
return WithSpan(ctx, "time.Sleep", func(ctx context.Context, span trace.Span) error {
t := time.NewTimer(dur)
defer t.Stop()
_, span := StartSpan(ctx, "time.Sleep")
defer span.End()
select {
case <-t.C:
return nil
case <-ctx.Done():
return ctx.Err()
}
})
t := time.NewTimer(dur)
defer t.Stop()
select {
case <-t.C:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func ToLower(s string) string {
@ -54,15 +55,11 @@ func isLower(s string) bool {
var tracer = otel.Tracer("github.com/go-redis/redis")
func WithSpan(ctx context.Context, name string, fn func(context.Context, trace.Span) error) error {
func StartSpan(ctx context.Context, name string) (context.Context, trace.Span) {
if span := trace.SpanFromContext(ctx); !span.IsRecording() {
return fn(ctx, span)
return ctx, span
}
ctx, span := tracer.Start(ctx, name)
defer span.End()
return fn(ctx, span)
return tracer.Start(ctx, name)
}
func RecordError(ctx context.Context, span trace.Span, err error) error {


@ -14,8 +14,7 @@ import (
"github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/pool"
"go.opentelemetry.io/otel/label"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/attribute"
)
// Limiter is the interface of a rate limiter or a circuit breaker.
@ -292,20 +291,21 @@ func getUserPassword(u *url.URL) (string, string) {
func newConnPool(opt *Options) *pool.ConnPool {
return pool.NewConnPool(&pool.Options{
Dialer: func(ctx context.Context) (net.Conn, error) {
var conn net.Conn
err := internal.WithSpan(ctx, "redis.dial", func(ctx context.Context, span trace.Span) error {
span.SetAttributes(
label.String("db.connection_string", opt.Addr),
)
ctx, span := internal.StartSpan(ctx, "redis.dial")
defer span.End()
var err error
conn, err = opt.Dialer(ctx, opt.Network, opt.Addr)
if err != nil {
_ = internal.RecordError(ctx, span, err)
}
return err
})
return conn, err
if span.IsRecording() {
span.SetAttributes(
attribute.String("db.connection_string", opt.Addr),
)
}
cn, err := opt.Dialer(ctx, opt.Network, opt.Addr)
if err != nil {
return nil, internal.RecordError(ctx, span, err)
}
return cn, nil
},
PoolSize: opt.PoolSize,
MinIdleConns: opt.MinIdleConns,


@ -10,8 +10,7 @@ import (
"github.com/go-redis/redis/v8/internal"
"github.com/go-redis/redis/v8/internal/pool"
"github.com/go-redis/redis/v8/internal/proto"
"go.opentelemetry.io/otel/label"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/attribute"
)
// Nil reply returned by Redis when key does not exist.
@ -214,10 +213,7 @@ func (c *baseClient) _getConn(ctx context.Context) (*pool.Conn, error) {
return cn, nil
}
err = internal.WithSpan(ctx, "redis.init_conn", func(ctx context.Context, span trace.Span) error {
return c.initConn(ctx, cn)
})
if err != nil {
if err := c.initConn(ctx, cn); err != nil {
c.connPool.Remove(ctx, cn, err)
if err := errors.Unwrap(err); err != nil {
return nil, err
@ -241,6 +237,9 @@ func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
return nil
}
ctx, span := internal.StartSpan(ctx, "redis.init_conn")
defer span.End()
connPool := pool.NewSingleConnPool(c.connPool, cn)
conn := newConn(ctx, c.opt, connPool)
@ -288,43 +287,45 @@ func (c *baseClient) releaseConn(ctx context.Context, cn *pool.Conn, err error)
func (c *baseClient) withConn(
ctx context.Context, fn func(context.Context, *pool.Conn) error,
) error {
return internal.WithSpan(ctx, "redis.with_conn", func(ctx context.Context, span trace.Span) error {
cn, err := c.getConn(ctx)
if err != nil {
return err
ctx, span := internal.StartSpan(ctx, "redis.with_conn")
defer span.End()
cn, err := c.getConn(ctx)
if err != nil {
return err
}
if span.IsRecording() {
if remoteAddr := cn.RemoteAddr(); remoteAddr != nil {
span.SetAttributes(attribute.String("net.peer.ip", remoteAddr.String()))
}
}
if span.IsRecording() {
if remoteAddr := cn.RemoteAddr(); remoteAddr != nil {
span.SetAttributes(label.String("net.peer.ip", remoteAddr.String()))
}
}
defer func() {
c.releaseConn(ctx, cn, err)
}()
defer func() {
c.releaseConn(ctx, cn, err)
}()
done := ctx.Done() //nolint:ifshort
done := ctx.Done()
if done == nil {
err = fn(ctx, cn)
return err
}
if done == nil {
err = fn(ctx, cn)
return err
}
errc := make(chan error, 1)
go func() { errc <- fn(ctx, cn) }()
errc := make(chan error, 1)
go func() { errc <- fn(ctx, cn) }()
select {
case <-done:
_ = cn.Close()
// Wait for the goroutine to finish and send something.
<-errc
select {
case <-done:
_ = cn.Close()
// Wait for the goroutine to finish and send something.
<-errc
err = ctx.Err()
return err
case err = <-errc:
return err
}
})
err = ctx.Err()
return err
case err = <-errc:
return err
}
}
func (c *baseClient) process(ctx context.Context, cmd Cmder) error {
@ -332,47 +333,50 @@ func (c *baseClient) process(ctx context.Context, cmd Cmder) error {
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
attempt := attempt
var retry bool
err := internal.WithSpan(ctx, "redis.process", func(ctx context.Context, span trace.Span) error {
if attempt > 0 {
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
retryTimeout := uint32(1)
err := c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmd)
})
if err != nil {
return err
}
err = cn.WithReader(ctx, c.cmdTimeout(cmd), cmd.readReply)
if err != nil {
if cmd.readTimeout() == nil {
atomic.StoreUint32(&retryTimeout, 1)
}
return err
}
return nil
})
if err == nil {
return nil
}
retry = shouldRetry(err, atomic.LoadUint32(&retryTimeout) == 1)
return err
})
retry, err := c._process(ctx, cmd, attempt)
if err == nil || !retry {
return err
}
lastErr = err
}
return lastErr
}
func (c *baseClient) _process(ctx context.Context, cmd Cmder, attempt int) (bool, error) {
if attempt > 0 {
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return false, err
}
}
retryTimeout := uint32(1)
err := c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmd)
})
if err != nil {
return err
}
err = cn.WithReader(ctx, c.cmdTimeout(cmd), cmd.readReply)
if err != nil {
if cmd.readTimeout() == nil {
atomic.StoreUint32(&retryTimeout, 1)
}
return err
}
return nil
})
if err == nil {
return false, nil
}
retry := shouldRetry(err, atomic.LoadUint32(&retryTimeout) == 1)
return retry, err
}
func (c *baseClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}


@ -625,7 +625,7 @@ func parseSlaveAddrs(addrs []interface{}, keepDisconnected bool) []string {
func (c *sentinelFailover) trySwitchMaster(ctx context.Context, addr string) {
c.mu.RLock()
currentAddr := c._masterAddr
currentAddr := c._masterAddr //nolint:ifshort
c.mu.RUnlock()
if addr == currentAddr {
@ -666,15 +666,22 @@ func (c *sentinelFailover) discoverSentinels(ctx context.Context) {
}
for _, sentinel := range sentinels {
vals := sentinel.([]interface{})
var ip, port string
for i := 0; i < len(vals); i += 2 {
key := vals[i].(string)
if key == "name" {
sentinelAddr := vals[i+1].(string)
if !contains(c.sentinelAddrs, sentinelAddr) {
internal.Logger.Printf(ctx, "sentinel: discovered new sentinel=%q for master=%q",
sentinelAddr, c.opt.MasterName)
c.sentinelAddrs = append(c.sentinelAddrs, sentinelAddr)
}
switch key {
case "ip":
ip = vals[i+1].(string)
case "port":
port = vals[i+1].(string)
}
}
if ip != "" && port != "" {
sentinelAddr := net.JoinHostPort(ip, port)
if !contains(c.sentinelAddrs, sentinelAddr) {
internal.Logger.Printf(ctx, "sentinel: discovered new sentinel=%q for master=%q",
sentinelAddr, c.opt.MasterName)
c.sentinelAddrs = append(c.sentinelAddrs, sentinelAddr)
}
}
}


@ -53,6 +53,7 @@ type UniversalOptions struct {
// The sentinel master name.
// Only failover clients.
MasterName string
}
@ -168,9 +169,9 @@ func (o *UniversalOptions) Simple() *Options {
// --------------------------------------------------------------------
// UniversalClient is an abstract client which - based on the provided options -
// can connect to either clusters, or sentinel-backed failover instances
// or simple single-instance servers. This can be useful for testing
// cluster-specific applications locally.
// represents either a ClusterClient, a FailoverClient, or a single-node Client.
// This can be useful for testing cluster-specific applications locally or having different
// clients in different environments.
type UniversalClient interface {
Cmdable
Context() context.Context
@ -190,12 +191,12 @@ var (
_ UniversalClient = (*Ring)(nil)
)
// NewUniversalClient returns a new multi client. The type of client returned depends
// on the following three conditions:
// NewUniversalClient returns a new multi client. The type of the returned client depends
// on the following conditions:
//
// 1. if a MasterName is passed a sentinel-backed FailoverClient will be returned
// 2. if the number of Addrs is two or more, a ClusterClient will be returned
// 3. otherwise, a single-node redis Client will be returned.
// 1. If the MasterName option is specified, a sentinel-backed FailoverClient is returned.
// 2. if the number of Addrs is two or more, a ClusterClient is returned.
// 3. Otherwise, a single-node Client is returned.
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.Failover())