Allow common redis and leveldb connections (#12385)

* Allow common redis and leveldb connections

Prevents repeatedly reopening redis and leveldb connections to the same
place by sharing connections.

Further allows for a more configurable redis connection type using the
redisURI scheme, and adds a leveldbURI scheme.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* add unit-test

Signed-off-by: Andrew Thornton <art27@cantab.net>

* as per @lunny

Signed-off-by: Andrew Thornton <art27@cantab.net>

* add test

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update modules/cache/cache_redis.go

* Update modules/queue/queue_disk.go

* Update modules/cache/cache_redis.go

* Update modules/cache/cache_redis.go

* Update modules/queue/unique_queue_disk.go

* Update modules/queue/queue_disk.go

* Update modules/queue/unique_queue_disk.go

* Update modules/session/redis.go

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
zeripath 2020-09-27 22:09:46 +01:00 committed by GitHub
parent f404bdde9b
commit 7f8e3192cd
71 changed files with 4927 additions and 3138 deletions

View File

@ -467,8 +467,10 @@ LENGTH = 20
BATCH_LENGTH = 20
; Connection string for redis queues - this will store the redis connection string.
CONN_STR = "addrs=127.0.0.1:6379 db=0"
; Provide the suffix of the default redis queue name - specific queues can be overridden within their [queue.name] sections.
; Provides the suffix of the default redis/disk queue name - specific queues can be overridden within their [queue.name] sections.
QUEUE_NAME = "_queue"
; Provides the suffix of the default redis/disk unique queue set name - specific queues can be overridden within their [queue.name] sections.
SET_NAME = "_unique"
; If the queue cannot be created at startup - level queues may need a timeout at startup - wrap the queue:
WRAP_IF_NECESSARY = true
; Attempt to create the wrapped queue at max

View File

@ -308,15 +308,13 @@ relation to port exhaustion.
## Queue (`queue` and `queue.*`)
- `TYPE`: **persistable-channel**: General queue type, currently support: `persistable-channel`, `channel`, `level`, `redis`, `dummy`
- `DATADIR`: **queues/**: Base DataDir for storing persistent and level queues. `DATADIR` for inidividual queues can be set in `queue.name` sections but will default to `DATADIR/`**`name`**.
- `DATADIR`: **queues/**: Base DataDir for storing persistent and level queues. `DATADIR` for individual queues can be set in `queue.name` sections but will default to `DATADIR/`**`name`**.
- `LENGTH`: **20**: Maximal queue size before channel queues block
- `BATCH_LENGTH`: **20**: Batch data before passing to the handler
- `CONN_STR`: **addrs=127.0.0.1:6379 db=0**: Connection string for the redis queue type.
- `QUEUE_NAME`: **_queue**: The suffix for the default redis queue name. Individual queues will default to **`name`**`QUEUE_NAME` but can be overridden in the specific `queue.name` section.
- `SET_NAME`: **_unique**: The suffix that will added to the default redis
set name for unique queues. Individual queues will default to
**`name`**`QUEUE_NAME`_`SET_NAME`_ but can be overridden in the specific
`queue.name` section.
- `CONN_STR`: **redis://127.0.0.1:6379/0**: Connection string for the redis queue type. Options can be set using query params. Similarly, LevelDB options can also be set using: **leveldb://relative/path?option=value** or **leveldb:///absolute/path?option=value** (see the sketch after this list).
- `QUEUE_NAME`: **_queue**: The suffix for the default redis and disk queue name. Individual queues will default to **`name`**`QUEUE_NAME` but can be overridden in the specific `queue.name` section.
- `SET_NAME`: **_unique**: The suffix that will be added to the default redis and disk queue `set` name for unique queues. Individual queues will default to **`name`**`QUEUE_NAME`_`SET_NAME`_ but can be overridden in the specific `queue.name` section.
- `WRAP_IF_NECESSARY`: **true**: Will wrap queues with a timeoutable queue if the selected queue is not ready to be created - (Only relevant for the level queue.)
- `MAX_ATTEMPTS`: **10**: Maximum number of attempts to create the wrapped queue
- `TIMEOUT`: **GRACEFUL_HAMMER_TIME + 30s**: Timeout the creation of the wrapped queue if it takes longer than this to create.
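
As a hedged illustration (not part of this commit's diff), the new-style strings can be exercised through the `modules/nosql` package introduced below; the printed URI matches the unit test added by this PR:

```go
package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/nosql"
)

func main() {
	// Old-style connection strings are still accepted and normalised:
	fmt.Println(nosql.ToRedisURI("addrs=127.0.0.1:6379 db=0"))
	// -> redis://127.0.0.1:6379/0

	// Two callers asking for the same URI share a single client.
	a := nosql.GetManager().GetRedisClient("redis://127.0.0.1:6379/0")
	b := nosql.GetManager().GetRedisClient("redis://127.0.0.1:6379/0")
	fmt.Println(a == b) // true - one shared connection
}
```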
@ -459,7 +457,7 @@ set name for unique queues. Individual queues will default to
- `ADAPTER`: **memory**: Cache engine adapter, either `memory`, `redis`, or `memcache`.
- `INTERVAL`: **60**: Garbage Collection interval (sec), for memory cache only.
- `HOST`: **\<empty\>**: Connection string for `redis` and `memcache`.
- Redis: `network=tcp,addr=127.0.0.1:6379,password=macaron,db=0,pool_size=100,idle_timeout=180`
- Redis: `redis://:macaron@127.0.0.1:6379/0?pool_size=100&idle_timeout=180s`
- Memcache: `127.0.0.1:9090;127.0.0.1:9091`
- `ITEM_TTL`: **16h**: Time to keep items in cache if not used, Setting it to 0 disables caching.
@ -708,7 +706,7 @@ Task queue configuration has been moved to `queue.task`. However, the below conf
- `QUEUE_TYPE`: **channel**: Task queue type, could be `channel` or `redis`.
- `QUEUE_LENGTH`: **1000**: Task queue length, available only when `QUEUE_TYPE` is `channel`.
- `QUEUE_CONN_STR`: **addrs=127.0.0.1:6379 db=0**: Task queue connection string, available only when `QUEUE_TYPE` is `redis`. If redis needs a password, use `addrs=127.0.0.1:6379 password=123 db=0`.
- `QUEUE_CONN_STR`: **redis://127.0.0.1:6379/0**: Task queue connection string, available only when `QUEUE_TYPE` is `redis`. If redis needs a password, use `redis://123@127.0.0.1:6379/0`.
## Migrations (`migrations`)

go.mod
View File

@ -38,7 +38,7 @@ require (
github.com/go-enry/go-enry/v2 v2.5.2
github.com/go-git/go-billy/v5 v5.0.0
github.com/go-git/go-git/v5 v5.1.0
github.com/go-redis/redis v6.15.2+incompatible
github.com/go-redis/redis/v7 v7.4.0
github.com/go-sql-driver/mysql v1.5.0
github.com/go-swagger/go-swagger v0.25.0
github.com/go-testfixtures/testfixtures/v3 v3.4.0
@ -88,6 +88,7 @@ require (
github.com/shurcooL/httpfs v0.0.0-20190527155220-6a4d4a70508b // indirect
github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
github.com/stretchr/testify v1.6.1
github.com/syndtr/goleveldb v1.0.0
github.com/tecbot/gorocksdb v0.0.0-20181010114359-8752a9433481 // indirect
github.com/tinylib/msgp v1.1.2 // indirect
github.com/tstranex/u2f v1.0.0

go.sum
View File

@ -342,6 +342,8 @@ github.com/go-openapi/validate v0.19.10 h1:tG3SZ5DC5KF4cyt7nqLVcQXGj5A7mpaYkAcNP
github.com/go-openapi/validate v0.19.10/go.mod h1:RKEZTUWDkxKQxN2jDT7ZnZi2bhZlbNMAuKvKB+IaGx8=
github.com/go-redis/redis v6.15.2+incompatible h1:9SpNVG76gr6InJGxoZ6IuuxaCOQwDAhzyXg+Bs+0Sb4=
github.com/go-redis/redis v6.15.2+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-sql-driver/mysql v1.4.1 h1:g24URVg0OFbNUTx9qqY1IRZ9D9z3iPyi5zKhQZpNwpA=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
@ -730,9 +732,13 @@ github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
@ -1014,6 +1020,7 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=

View File

@ -13,7 +13,6 @@ import (
mc "gitea.com/macaron/cache"
_ "gitea.com/macaron/cache/memcache" // memcache plugin for cache
_ "gitea.com/macaron/cache/redis"
)
var (

View File

@ -1,35 +1,23 @@
// Copyright 2013 Beego Authors
// Copyright 2014 The Macaron Authors
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package cache
import (
"fmt"
"strings"
"time"
"github.com/go-redis/redis"
"github.com/unknwon/com"
"gopkg.in/ini.v1"
"code.gitea.io/gitea/modules/nosql"
"gitea.com/macaron/cache"
"github.com/go-redis/redis/v7"
"github.com/unknwon/com"
)
// RedisCacher represents a redis cache adapter implementation.
type RedisCacher struct {
c *redis.Client
c redis.UniversalClient
prefix string
hsetName string
occupyMode bool
@ -112,7 +100,7 @@ func (c *RedisCacher) IsExist(key string) bool {
// Flush deletes all cached data.
func (c *RedisCacher) Flush() error {
if c.occupyMode {
return c.c.FlushDb().Err()
return c.c.FlushDB().Err()
}
keys, err := c.c.HKeys(c.hsetName).Result()
@ -131,46 +119,20 @@ func (c *RedisCacher) StartAndGC(opts cache.Options) error {
c.hsetName = "MacaronCache"
c.occupyMode = opts.OccupyMode
cfg, err := ini.Load([]byte(strings.Replace(opts.AdapterConfig, ",", "\n", -1)))
if err != nil {
return err
}
uri := nosql.ToRedisURI(opts.AdapterConfig)
opt := &redis.Options{
Network: "tcp",
}
for k, v := range cfg.Section("").KeysHash() {
c.c = nosql.GetManager().GetRedisClient(uri.String())
for k, v := range uri.Query() {
switch k {
case "network":
opt.Network = v
case "addr":
opt.Addr = v
case "password":
opt.Password = v
case "db":
opt.DB = com.StrTo(v).MustInt()
case "pool_size":
opt.PoolSize = com.StrTo(v).MustInt()
case "idle_timeout":
opt.IdleTimeout, err = time.ParseDuration(v + "s")
if err != nil {
return fmt.Errorf("error parsing idle timeout: %v", err)
}
case "hset_name":
c.hsetName = v
c.hsetName = v[0]
case "prefix":
c.prefix = v
default:
return fmt.Errorf("session/redis: unsupported option '%s'", k)
c.prefix = v[0]
}
}
c.c = redis.NewClient(opt)
if err = c.c.Ping().Err(); err != nil {
return err
}
return nil
return c.c.Ping().Err()
}
func init() {
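
The rewritten `StartAndGC` means the cache adapter now accepts both string styles. A hedged sketch (not part of this commit) of the conversion, whose first output matches the unit test added by this PR:

```go
package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/nosql"
)

func main() {
	// The old macaron-style string is normalised before a client is built...
	old := "network=tcp,addr=127.0.0.1:6379,password=macaron,db=0,pool_size=100,idle_timeout=180"
	fmt.Println(nosql.ToRedisURI(old))
	// -> redis://:macaron@127.0.0.1:6379/0?idle_timeout=180s&pool_size=100

	// ...and cache-specific options (hset_name, prefix) simply ride along
	// as query parameters that StartAndGC reads back out.
	fmt.Println(nosql.ToRedisURI("addr=127.0.0.1:6379,db=0,hset_name=MacaronCache,prefix=gitea:"))
}
```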

modules/nosql/leveldb.go (new file)
View File

@ -0,0 +1,25 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import "net/url"
// ToLevelDBURI converts old style connections to a LevelDBURI
//
// A LevelDBURI matches the pattern:
//
// leveldb://path[?[option=value]*]
//
// We have previously just provided the path but this prevents passing other options
func ToLevelDBURI(connection string) *url.URL {
uri, err := url.Parse(connection)
if err == nil && uri.Scheme == "leveldb" {
return uri
}
uri, _ = url.Parse("leveldb://common")
uri.Host = ""
uri.Path = connection
return uri
}
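
A short hedged sketch (not part of this commit) of the conversion - a bare path is wrapped, an explicit URI passes through with its options:

```go
package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/nosql"
)

func main() {
	// A plain path (the old configuration style) is wrapped into a URI...
	fmt.Println(nosql.ToLevelDBURI("queues/common"))
	// -> leveldb://queues/common

	// ...while an explicit URI is returned untouched, options included.
	fmt.Println(nosql.ToLevelDBURI("leveldb:///var/lib/gitea/queues?nosync=true"))
}
```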

modules/nosql/manager.go (new file)
View File

@ -0,0 +1,71 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import (
"strconv"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/syndtr/goleveldb/leveldb"
)
var manager *Manager
// Manager is the nosql connection manager
type Manager struct {
mutex sync.Mutex
RedisConnections map[string]*redisClientHolder
LevelDBConnections map[string]*levelDBHolder
}
type redisClientHolder struct {
redis.UniversalClient
name []string
count int64
}
func (r *redisClientHolder) Close() error {
return manager.CloseRedisClient(r.name[0])
}
type levelDBHolder struct {
name []string
count int64
db *leveldb.DB
}
func init() {
_ = GetManager()
}
// GetManager returns a Manager and initializes one as a singleton if there's none yet
func GetManager() *Manager {
if manager == nil {
manager = &Manager{
RedisConnections: make(map[string]*redisClientHolder),
LevelDBConnections: make(map[string]*levelDBHolder),
}
}
return manager
}
func valToTimeDuration(vs []string) (result time.Duration) {
var err error
for _, v := range vs {
result, err = time.ParseDuration(v)
if err != nil {
var val int
val, err = strconv.Atoi(v)
result = time.Duration(val)
}
if err == nil {
return
}
}
return
}
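
Note that `valToTimeDuration` falls back to `strconv.Atoi`, so a bare integer becomes a `time.Duration` of that many nanoseconds (old-style `idle_timeout=180` values are already rewritten to `180s` by `ToRedisURI` before reaching it). A hedged in-package test sketch, not part of this commit:

```go
package nosql

import (
	"testing"
	"time"
)

func TestValToTimeDuration(t *testing.T) {
	cases := []struct {
		in   []string
		want time.Duration
	}{
		{[]string{"180s"}, 180 * time.Second},    // any time.ParseDuration form
		{[]string{"500"}, 500 * time.Nanosecond}, // bare integers parse as nanoseconds
		{[]string{"bad", "1h"}, time.Hour},       // first parsable value wins
	}
	for _, c := range cases {
		if got := valToTimeDuration(c.in); got != c.want {
			t.Errorf("valToTimeDuration(%v) = %v, want %v", c.in, got, c.want)
		}
	}
}
```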

View File

@ -0,0 +1,151 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import (
"path"
"strconv"
"strings"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/opt"
)
// CloseLevelDB closes a levelDB
func (m *Manager) CloseLevelDB(connection string) error {
m.mutex.Lock()
defer m.mutex.Unlock()
db, ok := m.LevelDBConnections[connection]
if !ok {
connection = ToLevelDBURI(connection).String()
db, ok = m.LevelDBConnections[connection]
}
if !ok {
return nil
}
db.count--
if db.count > 0 {
return nil
}
for _, name := range db.name {
delete(m.LevelDBConnections, name)
}
return db.db.Close()
}
// GetLevelDB gets a levelDB for a particular connection
func (m *Manager) GetLevelDB(connection string) (*leveldb.DB, error) {
m.mutex.Lock()
defer m.mutex.Unlock()
db, ok := m.LevelDBConnections[connection]
if ok {
db.count++
return db.db, nil
}
dataDir := connection
uri := ToLevelDBURI(connection)
db = &levelDBHolder{
name: []string{connection, uri.String()},
}
dataDir = path.Join(uri.Host, uri.Path)
opts := &opt.Options{}
for k, v := range uri.Query() {
switch replacer.Replace(strings.ToLower(k)) {
case "blockcachecapacity":
opts.BlockCacheCapacity, _ = strconv.Atoi(v[0])
case "blockcacheevictremoved":
opts.BlockCacheEvictRemoved, _ = strconv.ParseBool(v[0])
case "blockrestartinterval":
opts.BlockRestartInterval, _ = strconv.Atoi(v[0])
case "blocksize":
opts.BlockSize, _ = strconv.Atoi(v[0])
case "compactionexpandlimitfactor":
opts.CompactionExpandLimitFactor, _ = strconv.Atoi(v[0])
case "compactiongpoverlapsfactor":
opts.CompactionGPOverlapsFactor, _ = strconv.Atoi(v[0])
case "compactionl0trigger":
opts.CompactionL0Trigger, _ = strconv.Atoi(v[0])
case "compactionsourcelimitfactor":
opts.CompactionSourceLimitFactor, _ = strconv.Atoi(v[0])
case "compactiontablesize":
opts.CompactionTableSize, _ = strconv.Atoi(v[0])
case "compactiontablesizemultiplier":
opts.CompactionTableSizeMultiplier, _ = strconv.ParseFloat(v[0], 64)
case "compactiontablesizemultiplierperlevel":
for _, val := range v {
f, _ := strconv.ParseFloat(val, 64)
opts.CompactionTableSizeMultiplierPerLevel = append(opts.CompactionTableSizeMultiplierPerLevel, f)
}
case "compactiontotalsize":
opts.CompactionTotalSize, _ = strconv.Atoi(v[0])
case "compactiontotalsizemultiplier":
opts.CompactionTotalSizeMultiplier, _ = strconv.ParseFloat(v[0], 64)
case "compactiontotalsizemultiplierperlevel":
for _, val := range v {
f, _ := strconv.ParseFloat(val, 64)
opts.CompactionTotalSizeMultiplierPerLevel = append(opts.CompactionTotalSizeMultiplierPerLevel, f)
}
case "compression":
val, _ := strconv.Atoi(v[0])
opts.Compression = opt.Compression(val)
case "disablebufferpool":
opts.DisableBufferPool, _ = strconv.ParseBool(v[0])
case "disableblockcache":
opts.DisableBlockCache, _ = strconv.ParseBool(v[0])
case "disablecompactionbackoff":
opts.DisableCompactionBackoff, _ = strconv.ParseBool(v[0])
case "disablelargebatchtransaction":
opts.DisableLargeBatchTransaction, _ = strconv.ParseBool(v[0])
case "errorifexist":
opts.ErrorIfExist, _ = strconv.ParseBool(v[0])
case "errorifmissing":
opts.ErrorIfMissing, _ = strconv.ParseBool(v[0])
case "iteratorsamplingrate":
opts.IteratorSamplingRate, _ = strconv.Atoi(v[0])
case "nosync":
opts.NoSync, _ = strconv.ParseBool(v[0])
case "nowritemerge":
opts.NoWriteMerge, _ = strconv.ParseBool(v[0])
case "openfilescachecapacity":
opts.OpenFilesCacheCapacity, _ = strconv.Atoi(v[0])
case "readonly":
opts.ReadOnly, _ = strconv.ParseBool(v[0])
case "strict":
val, _ := strconv.Atoi(v[0])
opts.Strict = opt.Strict(val)
case "writebuffer":
opts.WriteBuffer, _ = strconv.Atoi(v[0])
case "writel0pausetrigger":
opts.WriteL0PauseTrigger, _ = strconv.Atoi(v[0])
case "writel0slowdowntrigger":
opts.WriteL0SlowdownTrigger, _ = strconv.Atoi(v[0])
case "clientname":
db.name = append(db.name, v[0])
}
}
var err error
db.db, err = leveldb.OpenFile(dataDir, opts)
if err != nil {
if !errors.IsCorrupted(err) {
return nil, err
}
db.db, err = leveldb.RecoverFile(dataDir, opts)
if err != nil {
return nil, err
}
}
for _, name := range db.name {
m.LevelDBConnections[name] = db
}
db.count++
return db.db, nil
}
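
Because the holder is registered under both the raw connection string and its normalised URI, callers using either form share one `*leveldb.DB`. A hedged usage sketch (it opens a real database under the working directory):

```go
package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/nosql"
)

func main() {
	// The plain-path and URI forms resolve to the same underlying DB,
	// so the second call only bumps the reference count.
	db1, err := nosql.GetManager().GetLevelDB("queues/common")
	if err != nil {
		panic(err)
	}
	db2, _ := nosql.GetManager().GetLevelDB("leveldb://queues/common")
	fmt.Println(db1 == db2) // true - one shared handle

	// Each user closes its own reference; the file is only closed once
	// the count drops to zero.
	_ = nosql.GetManager().CloseLevelDB("leveldb://queues/common")
	_ = nosql.GetManager().CloseLevelDB("queues/common")
}
```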

View File

@ -0,0 +1,205 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import (
"crypto/tls"
"path"
"strconv"
"strings"
"github.com/go-redis/redis/v7"
)
var replacer = strings.NewReplacer("_", "", "-", "")
// CloseRedisClient closes a redis client
func (m *Manager) CloseRedisClient(connection string) error {
m.mutex.Lock()
defer m.mutex.Unlock()
client, ok := m.RedisConnections[connection]
if !ok {
connection = ToRedisURI(connection).String()
client, ok = m.RedisConnections[connection]
}
if !ok {
return nil
}
client.count--
if client.count > 0 {
return nil
}
for _, name := range client.name {
delete(m.RedisConnections, name)
}
return client.UniversalClient.Close()
}
// GetRedisClient gets a redis client for a particular connection
func (m *Manager) GetRedisClient(connection string) redis.UniversalClient {
m.mutex.Lock()
defer m.mutex.Unlock()
client, ok := m.RedisConnections[connection]
if ok {
client.count++
return client
}
uri := ToRedisURI(connection)
client, ok = m.RedisConnections[uri.String()]
if ok {
client.count++
return client
}
client = &redisClientHolder{
name: []string{connection, uri.String()},
}
opts := &redis.UniversalOptions{}
tlsConfig := &tls.Config{}
// Handle username/password
if password, ok := uri.User.Password(); ok {
opts.Password = password
// Username does not appear to be handled by redis.Options
opts.Username = uri.User.Username()
} else if uri.User.Username() != "" {
// assume this is the password
opts.Password = uri.User.Username()
}
// Now handle the uri query sets
for k, v := range uri.Query() {
switch replacer.Replace(strings.ToLower(k)) {
case "addr":
opts.Addrs = append(opts.Addrs, v...)
case "addrs":
opts.Addrs = append(opts.Addrs, strings.Split(v[0], ",")...)
case "username":
opts.Username = v[0]
case "password":
opts.Password = v[0]
case "database":
fallthrough
case "db":
opts.DB, _ = strconv.Atoi(v[0])
case "maxretries":
opts.MaxRetries, _ = strconv.Atoi(v[0])
case "minretrybackoff":
opts.MinRetryBackoff = valToTimeDuration(v)
case "maxretrybackoff":
opts.MaxRetryBackoff = valToTimeDuration(v)
case "timeout":
timeout := valToTimeDuration(v)
if timeout != 0 {
if opts.DialTimeout == 0 {
opts.DialTimeout = timeout
}
if opts.ReadTimeout == 0 {
opts.ReadTimeout = timeout
}
}
case "dialtimeout":
opts.DialTimeout = valToTimeDuration(v)
case "readtimeout":
opts.ReadTimeout = valToTimeDuration(v)
case "writetimeout":
opts.WriteTimeout = valToTimeDuration(v)
case "poolsize":
opts.PoolSize, _ = strconv.Atoi(v[0])
case "minidleconns":
opts.MinIdleConns, _ = strconv.Atoi(v[0])
case "pooltimeout":
opts.PoolTimeout = valToTimeDuration(v)
case "idletimeout":
opts.IdleTimeout = valToTimeDuration(v)
case "idlecheckfrequency":
opts.IdleCheckFrequency = valToTimeDuration(v)
case "maxredirects":
opts.MaxRedirects, _ = strconv.Atoi(v[0])
case "readonly":
opts.ReadOnly, _ = strconv.ParseBool(v[0])
case "routebylatency":
opts.RouteByLatency, _ = strconv.ParseBool(v[0])
case "routerandomly":
opts.RouteRandomly, _ = strconv.ParseBool(v[0])
case "sentinelmasterid":
fallthrough
case "mastername":
opts.MasterName = v[0]
case "skipverify":
fallthrough
case "insecureskipverify":
insecureSkipVerify, _ := strconv.ParseBool(v[0])
tlsConfig.InsecureSkipVerify = insecureSkipVerify
case "clientname":
client.name = append(client.name, v[0])
}
}
switch uri.Scheme {
case "redis+sentinels":
fallthrough
case "rediss+sentinel":
opts.TLSConfig = tlsConfig
fallthrough
case "redis+sentinel":
if uri.Host != "" {
opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
}
if uri.Path != "" {
if db, err := strconv.Atoi(uri.Path); err == nil {
opts.DB = db
}
}
client.UniversalClient = redis.NewFailoverClient(opts.Failover())
case "redis+clusters":
fallthrough
case "rediss+cluster":
opts.TLSConfig = tlsConfig
fallthrough
case "redis+cluster":
if uri.Host != "" {
opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
}
if uri.Path != "" {
if db, err := strconv.Atoi(uri.Path); err == nil {
opts.DB = db
}
}
client.UniversalClient = redis.NewClusterClient(opts.Cluster())
case "redis+socket":
simpleOpts := opts.Simple()
simpleOpts.Network = "unix"
simpleOpts.Addr = path.Join(uri.Host, uri.Path)
client.UniversalClient = redis.NewClient(simpleOpts)
case "rediss":
opts.TLSConfig = tlsConfig
fallthrough
case "redis":
if uri.Host != "" {
opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
}
if uri.Path != "" {
if db, err := strconv.Atoi(uri.Path); err == nil {
opts.DB = db
}
}
client.UniversalClient = redis.NewClient(opts.Simple())
default:
return nil
}
for _, name := range client.name {
m.RedisConnections[name] = client
}
client.count++
return client
}
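
A hedged sketch of the URI schemes this switch supports (hostnames are placeholders; go-redis connects lazily, so a real server is only needed once commands run):

```go
package main

import "code.gitea.io/gitea/modules/nosql"

func main() {
	m := nosql.GetManager()

	// Single node, with pool options passed as query parameters:
	c1 := m.GetRedisClient("redis://127.0.0.1:6379/0?pool_size=100&idle_timeout=180s")
	// The same URI returns the same shared UniversalClient:
	c2 := m.GetRedisClient("redis://127.0.0.1:6379/0?pool_size=100&idle_timeout=180s")
	_ = c1 == c2 // true

	// Other schemes select failover (sentinel) or cluster clients:
	_ = m.GetRedisClient("redis+sentinel://:secret@sentinel1:26379,sentinel2:26379?master_name=mymaster")
	_ = m.GetRedisClient("redis+cluster://node1:6379,node2:6379")

	// Close decrements the reference count; the connection is torn down
	// only when every user has closed it.
	_ = c1.Close()
}
```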

modules/nosql/redis.go (new file)
View File

@ -0,0 +1,102 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import (
"net/url"
"strconv"
"strings"
)
// This file contains common redis connection functions
// ToRedisURI converts old style connections to a RedisURI
//
// A RedisURI matches the pattern:
//
// redis://[username:password@]host[:port][/database][?[option=value]*]
// rediss://[username:password@]host[:port][/database][?[option=value]*]
// redis+socket://[username:password@]path[/database][?[option=value]*]
// redis+sentinel://[password@]host1[:port1][,host2[:port2]][,hostN[:portN]][/database][?[option=value]*]
// redis+cluster://[password@]host1[:port1][,host2[:port2]][,hostN[:portN]][/database][?[option=value]*]
//
// We have previously used a URI like:
// addrs=127.0.0.1:6379 db=0
// network=tcp,addr=127.0.0.1:6379,password=macaron,db=0,pool_size=100,idle_timeout=180
//
// We need to convert this old style to the new style
func ToRedisURI(connection string) *url.URL {
uri, err := url.Parse(connection)
if err == nil && strings.HasPrefix(uri.Scheme, "redis") {
// OK we're going to assume that this is a reasonable redis URI
return uri
}
// Let's set a nice default
uri, _ = url.Parse("redis://127.0.0.1:6379/0")
network := "tcp"
query := uri.Query()
// OK so there are two types: Space delimited and Comma delimited
// Let's assume that we have a space delimited string - as this is the most common
fields := strings.Fields(connection)
if len(fields) == 1 {
// It's a comma delimited string, then...
fields = strings.Split(connection, ",")
}
for _, f := range fields {
items := strings.SplitN(f, "=", 2)
if len(items) < 2 {
continue
}
switch strings.ToLower(items[0]) {
case "network":
if items[1] == "unix" {
uri.Scheme = "redis+socket"
}
network = items[1]
case "addrs":
uri.Host = items[1]
// now we need to handle the clustering
if strings.Contains(items[1], ",") && network == "tcp" {
uri.Scheme = "redis+cluster"
}
case "addr":
uri.Host = items[1]
case "password":
uri.User = url.UserPassword(uri.User.Username(), items[1])
case "username":
password, set := uri.User.Password()
if !set {
uri.User = url.User(items[1])
} else {
uri.User = url.UserPassword(items[1], password)
}
case "db":
uri.Path = "/" + items[1]
case "idle_timeout":
_, err := strconv.Atoi(items[1])
if err == nil {
query.Add("idle_timeout", items[1]+"s")
} else {
query.Add("idle_timeout", items[1])
}
default:
// Other options become query params
query.Add(items[0], items[1])
}
}
// Finally we need to fix up the Host if we have a unix socket
if uri.Scheme == "redis+socket" {
query.Set("db", uri.Path)
uri.Path = uri.Host
uri.Host = ""
}
uri.RawQuery = query.Encode()
return uri
}

View File

@ -0,0 +1,35 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package nosql
import (
"testing"
)
func TestToRedisURI(t *testing.T) {
tests := []struct {
name string
connection string
want string
}{
{
name: "old_default",
connection: "addrs=127.0.0.1:6379 db=0",
want: "redis://127.0.0.1:6379/0",
},
{
name: "old_macaron_session_default",
connection: "network=tcp,addr=127.0.0.1:6379,password=macaron,db=0,pool_size=100,idle_timeout=180",
want: "redis://:macaron@127.0.0.1:6379/0?idle_timeout=180s&pool_size=100",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := ToRedisURI(tt.connection); got == nil || got.String() != tt.want {
t.Errorf(`ToRedisURI(%q) = %s, want %s`, tt.connection, got.String(), tt.want)
}
})
}
}

View File

@ -5,6 +5,8 @@
package queue
import (
"code.gitea.io/gitea/modules/nosql"
"gitea.com/lunny/levelqueue"
)
@ -14,7 +16,9 @@ const LevelQueueType Type = "level"
// LevelQueueConfiguration is the configuration for a LevelQueue
type LevelQueueConfiguration struct {
ByteFIFOQueueConfiguration
DataDir string
DataDir string
ConnectionString string
QueueName string
}
// LevelQueue implements a disk library queue
@ -30,7 +34,11 @@ func NewLevelQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error)
}
config := configInterface.(LevelQueueConfiguration)
byteFIFO, err := NewLevelQueueByteFIFO(config.DataDir)
if len(config.ConnectionString) == 0 {
config.ConnectionString = config.DataDir
}
byteFIFO, err := NewLevelQueueByteFIFO(config.ConnectionString, config.QueueName)
if err != nil {
return nil, err
}
@ -51,18 +59,25 @@ var _ (ByteFIFO) = &LevelQueueByteFIFO{}
// LevelQueueByteFIFO represents a ByteFIFO formed from a LevelQueue
type LevelQueueByteFIFO struct {
internal *levelqueue.Queue
internal *levelqueue.Queue
connection string
}
// NewLevelQueueByteFIFO creates a ByteFIFO formed from a LevelQueue
func NewLevelQueueByteFIFO(dataDir string) (*LevelQueueByteFIFO, error) {
internal, err := levelqueue.Open(dataDir)
func NewLevelQueueByteFIFO(connection, prefix string) (*LevelQueueByteFIFO, error) {
db, err := nosql.GetManager().GetLevelDB(connection)
if err != nil {
return nil, err
}
internal, err := levelqueue.NewQueue(db, []byte(prefix), false)
if err != nil {
return nil, err
}
return &LevelQueueByteFIFO{
internal: internal,
connection: connection,
internal: internal,
}, nil
}
@ -87,7 +102,9 @@ func (fifo *LevelQueueByteFIFO) Pop() ([]byte, error) {
// Close this fifo
func (fifo *LevelQueueByteFIFO) Close() error {
return fifo.internal.Close()
err := fifo.internal.Close()
_ = nosql.GetManager().CloseLevelDB(fifo.connection)
return err
}
// Len returns the length of the fifo
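
With the ByteFIFO keyed by a prefix rather than a whole directory, several queues can now live in one shared LevelDB. A hedged sketch (queue names are illustrative; Push is part of the ByteFIFO interface):

```go
package main

import "code.gitea.io/gitea/modules/queue"

func main() {
	// Two FIFOs with distinct prefixes share a single LevelDB connection
	// instead of each locking its own directory.
	mail, err := queue.NewLevelQueueByteFIFO("leveldb://queues/common", "mail_queue")
	if err != nil {
		panic(err)
	}
	defer mail.Close()

	notify, err := queue.NewLevelQueueByteFIFO("leveldb://queues/common", "notification_queue")
	if err != nil {
		panic(err)
	}
	defer notify.Close()

	_ = mail.Push([]byte("hello")) // Push/Pop operate only on this prefix
}
```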

View File

@ -5,12 +5,10 @@
package queue
import (
"errors"
"strings"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/nosql"
"github.com/go-redis/redis"
"github.com/go-redis/redis/v7"
)
// RedisQueueType is the type for redis queue
@ -75,11 +73,8 @@ type RedisByteFIFO struct {
// RedisByteFIFOConfiguration is the configuration for the RedisByteFIFO
type RedisByteFIFOConfiguration struct {
Network string
Addresses string
Password string
DBIndex int
QueueName string
ConnectionString string
QueueName string
}
// NewRedisByteFIFO creates a ByteFIFO formed from a redisClient
@ -87,21 +82,7 @@ func NewRedisByteFIFO(config RedisByteFIFOConfiguration) (*RedisByteFIFO, error)
fifo := &RedisByteFIFO{
queueName: config.QueueName,
}
dbs := strings.Split(config.Addresses, ",")
if len(dbs) == 0 {
return nil, errors.New("no redis host specified")
} else if len(dbs) == 1 {
fifo.client = redis.NewClient(&redis.Options{
Network: config.Network,
Addr: strings.TrimSpace(dbs[0]), // use default Addr
Password: config.Password, // no password set
DB: config.DBIndex, // use default DB
})
} else {
fifo.client = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: dbs,
})
}
fifo.client = nosql.GetManager().GetRedisClient(config.ConnectionString)
if err := fifo.client.Ping().Err(); err != nil {
return nil, err
}
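
The redis ByteFIFO collapses its Network/Addresses/Password/DBIndex fields into one connection string; cluster versus single node is now decided by the URI. A hedged sketch (requires a reachable redis, since the constructor pings it):

```go
package main

import "code.gitea.io/gitea/modules/queue"

func main() {
	fifo, err := queue.NewRedisByteFIFO(queue.RedisByteFIFOConfiguration{
		// Old style "addrs=127.0.0.1:6379 db=0" still works here too.
		ConnectionString: "redis://:macaron@127.0.0.1:6379/0",
		QueueName:        "task_queue",
	})
	if err != nil {
		panic(err)
	}
	defer fifo.Close()
}
```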

View File

@ -5,6 +5,8 @@
package queue
import (
"code.gitea.io/gitea/modules/nosql"
"gitea.com/lunny/levelqueue"
)
@ -14,7 +16,9 @@ const LevelUniqueQueueType Type = "unique-level"
// LevelUniqueQueueConfiguration is the configuration for a LevelUniqueQueue
type LevelUniqueQueueConfiguration struct {
ByteFIFOQueueConfiguration
DataDir string
DataDir string
ConnectionString string
QueueName string
}
// LevelUniqueQueue implements a disk library queue
@ -34,7 +38,11 @@ func NewLevelUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue,
}
config := configInterface.(LevelUniqueQueueConfiguration)
byteFIFO, err := NewLevelUniqueQueueByteFIFO(config.DataDir)
if len(config.ConnectionString) == 0 {
config.ConnectionString = config.DataDir
}
byteFIFO, err := NewLevelUniqueQueueByteFIFO(config.ConnectionString, config.QueueName)
if err != nil {
return nil, err
}
@ -55,18 +63,25 @@ var _ (UniqueByteFIFO) = &LevelUniqueQueueByteFIFO{}
// LevelUniqueQueueByteFIFO represents a ByteFIFO formed from a LevelUniqueQueue
type LevelUniqueQueueByteFIFO struct {
internal *levelqueue.UniqueQueue
internal *levelqueue.UniqueQueue
connection string
}
// NewLevelUniqueQueueByteFIFO creates a new ByteFIFO formed from a LevelUniqueQueue
func NewLevelUniqueQueueByteFIFO(dataDir string) (*LevelUniqueQueueByteFIFO, error) {
internal, err := levelqueue.OpenUnique(dataDir)
func NewLevelUniqueQueueByteFIFO(connection, prefix string) (*LevelUniqueQueueByteFIFO, error) {
db, err := nosql.GetManager().GetLevelDB(connection)
if err != nil {
return nil, err
}
internal, err := levelqueue.NewUniqueQueue(db, []byte(prefix), []byte(prefix+"-unique"), false)
if err != nil {
return nil, err
}
return &LevelUniqueQueueByteFIFO{
internal: internal,
connection: connection,
internal: internal,
}, nil
}
@ -96,7 +111,9 @@ func (fifo *LevelUniqueQueueByteFIFO) Has(data []byte) (bool, error) {
// Close this fifo
func (fifo *LevelUniqueQueueByteFIFO) Close() error {
return fifo.internal.Close()
err := fifo.internal.Close()
_ = nosql.GetManager().CloseLevelDB(fifo.connection)
return err
}
func init() {
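
The unique variant keeps its queue under `prefix` and its uniqueness set under `prefix + "-unique"` in the same shared database. A hedged sketch:

```go
package main

import "code.gitea.io/gitea/modules/queue"

func main() {
	fifo, err := queue.NewLevelUniqueQueueByteFIFO("leveldb://queues/common", "pr_patch_checker")
	if err != nil {
		panic(err)
	}
	defer fifo.Close()

	// Has consults the "-unique" set to report whether a payload is
	// already queued.
	queued, _ := fifo.Has([]byte("some-task-payload"))
	_ = queued
}
```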

View File

@ -4,7 +4,7 @@
package queue
import "github.com/go-redis/redis"
import "github.com/go-redis/redis/v7"
// RedisUniqueQueueType is the type for redis queue
const RedisUniqueQueueType Type = "unique-redis"

View File

@ -1,5 +1,6 @@
// Copyright 2013 Beego Authors
// Copyright 2014 The Macaron Authors
// Copyright 2020 The Gitea Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
@ -17,19 +18,18 @@ package session
import (
"fmt"
"strings"
"sync"
"time"
"code.gitea.io/gitea/modules/nosql"
"gitea.com/macaron/session"
"github.com/go-redis/redis"
"github.com/unknwon/com"
"gopkg.in/ini.v1"
"github.com/go-redis/redis/v7"
)
// RedisStore represents a redis session store implementation.
type RedisStore struct {
c *redis.Client
c redis.UniversalClient
prefix, sid string
duration time.Duration
lock sync.RWMutex
@ -37,7 +37,7 @@ type RedisStore struct {
}
// NewRedisStore creates and returns a redis session store.
func NewRedisStore(c *redis.Client, prefix, sid string, dur time.Duration, kv map[interface{}]interface{}) *RedisStore {
func NewRedisStore(c redis.UniversalClient, prefix, sid string, dur time.Duration, kv map[interface{}]interface{}) *RedisStore {
return &RedisStore{
c: c,
prefix: prefix,
@ -104,7 +104,7 @@ func (s *RedisStore) Flush() error {
// RedisProvider represents a redis session provider implementation.
type RedisProvider struct {
c *redis.Client
c redis.UniversalClient
duration time.Duration
prefix string
}
@ -117,39 +117,16 @@ func (p *RedisProvider) Init(maxlifetime int64, configs string) (err error) {
return err
}
cfg, err := ini.Load([]byte(strings.Replace(configs, ",", "\n", -1)))
if err != nil {
return err
}
uri := nosql.ToRedisURI(configs)
opt := &redis.Options{
Network: "tcp",
}
for k, v := range cfg.Section("").KeysHash() {
for k, v := range uri.Query() {
switch k {
case "network":
opt.Network = v
case "addr":
opt.Addr = v
case "password":
opt.Password = v
case "db":
opt.DB = com.StrTo(v).MustInt()
case "pool_size":
opt.PoolSize = com.StrTo(v).MustInt()
case "idle_timeout":
opt.IdleTimeout, err = time.ParseDuration(v + "s")
if err != nil {
return fmt.Errorf("error parsing idle timeout: %v", err)
}
case "prefix":
p.prefix = v
default:
return fmt.Errorf("session/redis: unsupported option '%s'", k)
p.prefix = v[0]
}
}
p.c = redis.NewClient(opt)
p.c = nosql.GetManager().GetRedisClient(uri.String())
return p.c.Ping().Err()
}
@ -228,11 +205,11 @@ func (p *RedisProvider) Regenerate(oldsid, sid string) (_ session.RawStore, err
// Count counts and returns number of sessions.
func (p *RedisProvider) Count() int {
return int(p.c.DbSize().Val())
return int(p.c.DBSize().Val())
}
// GC calls GC to clean expired sessions.
func (_ *RedisProvider) GC() {}
func (*RedisProvider) GC() {}
func init() {
session.Register("redis", &RedisProvider{})
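
For session configuration this means the provider config can now be a redis URI, with the key prefix passed as a query parameter. A hedged sketch (Init pings redis, so it needs a reachable server):

```go
package main

import "code.gitea.io/gitea/modules/session"

func main() {
	p := &session.RedisProvider{}
	// Old macaron-style strings still work; they are converted by
	// nosql.ToRedisURI before the shared client is fetched.
	err := p.Init(3600, "redis://:macaron@127.0.0.1:6379/0?pool_size=100&idle_timeout=180s&prefix=session:")
	if err != nil {
		panic(err)
	}
}
```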

View File

@ -15,7 +15,6 @@ import (
mysql "gitea.com/macaron/session/mysql"
nodb "gitea.com/macaron/session/nodb"
postgres "gitea.com/macaron/session/postgres"
redis "gitea.com/macaron/session/redis"
)
// VirtualSessionProvider represents a shadowed session provider implementation.
@ -40,7 +39,7 @@ func (o *VirtualSessionProvider) Init(gclifetime int64, config string) error {
case "file":
o.provider = &session.FileProvider{}
case "redis":
o.provider = &redis.RedisProvider{}
o.provider = &RedisProvider{}
case "mysql":
o.provider = &mysql.MysqlProvider{}
case "postgres":

View File

@ -1 +0,0 @@
ignore

View File

@ -1 +0,0 @@
ignore

View File

@ -1,19 +0,0 @@
sudo: false
language: go
services:
- redis-server
go:
- 1.9.x
- 1.10.x
- 1.11.x
- tip
matrix:
allow_failures:
- go: tip
install:
- go get github.com/onsi/ginkgo
- go get github.com/onsi/gomega

View File

@ -1,25 +0,0 @@
# Changelog
## Unreleased
- Cluster and Ring pipelines process commands for each node in its own goroutine.
## 6.14
- Added Options.MinIdleConns.
- Added Options.MaxConnAge.
- PoolStats.FreeConns is renamed to PoolStats.IdleConns.
- Add Client.Do to simplify creating custom commands.
- Add Cmd.String, Cmd.Int, Cmd.Int64, Cmd.Uint64, Cmd.Float64, and Cmd.Bool helpers.
- Lower memory usage.
## v6.13
- Ring got new options called `HashReplicas` and `Hash`. It is recommended to set `HashReplicas = 1000` for better keys distribution between shards.
- Cluster client was optimized to use much less memory when reloading cluster state.
- PubSub.ReceiveMessage is re-worked to not use ReceiveTimeout so it does not lose data when timeout occurs. In most cases it is recommended to use PubSub.Channel instead.
- Dialer.KeepAlive is set to 5 minutes by default.
## v6.12
- ClusterClient got new option called `ClusterSlots` which allows to build cluster of normal Redis Servers that don't have cluster mode enabled. See https://godoc.org/github.com/go-redis/redis#example-NewClusterClient--ManualSetup

View File

@ -1,89 +0,0 @@
package internal
import (
"io"
"net"
"strings"
"github.com/go-redis/redis/internal/proto"
)
func IsRetryableError(err error, retryTimeout bool) bool {
if err == nil {
return false
}
if err == io.EOF {
return true
}
if netErr, ok := err.(net.Error); ok {
if netErr.Timeout() {
return retryTimeout
}
return true
}
s := err.Error()
if s == "ERR max number of clients reached" {
return true
}
if strings.HasPrefix(s, "LOADING ") {
return true
}
if strings.HasPrefix(s, "READONLY ") {
return true
}
if strings.HasPrefix(s, "CLUSTERDOWN ") {
return true
}
return false
}
func IsRedisError(err error) bool {
_, ok := err.(proto.RedisError)
return ok
}
func IsBadConn(err error, allowTimeout bool) bool {
if err == nil {
return false
}
if IsRedisError(err) {
// #790
return IsReadOnlyError(err)
}
if allowTimeout {
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
return false
}
}
return true
}
func IsMovedError(err error) (moved bool, ask bool, addr string) {
if !IsRedisError(err) {
return
}
s := err.Error()
if strings.HasPrefix(s, "MOVED ") {
moved = true
} else if strings.HasPrefix(s, "ASK ") {
ask = true
} else {
return
}
ind := strings.LastIndex(s, " ")
if ind == -1 {
return false, false, ""
}
addr = s[ind+1:]
return
}
func IsLoadingError(err error) bool {
return strings.HasPrefix(err.Error(), "LOADING ")
}
func IsReadOnlyError(err error) bool {
return strings.HasPrefix(err.Error(), "READONLY ")
}

View File

@ -1,15 +0,0 @@
package internal
import (
"fmt"
"log"
)
var Logger *log.Logger
func Logf(s string, args ...interface{}) {
if Logger == nil {
return
}
Logger.Output(2, fmt.Sprintf(s, args...))
}

View File

@ -1,93 +0,0 @@
package pool
import (
"net"
"sync/atomic"
"time"
"github.com/go-redis/redis/internal/proto"
)
var noDeadline = time.Time{}
type Conn struct {
netConn net.Conn
rd *proto.Reader
rdLocked bool
wr *proto.Writer
InitedAt time.Time
pooled bool
usedAt atomic.Value
}
func NewConn(netConn net.Conn) *Conn {
cn := &Conn{
netConn: netConn,
}
cn.rd = proto.NewReader(netConn)
cn.wr = proto.NewWriter(netConn)
cn.SetUsedAt(time.Now())
return cn
}
func (cn *Conn) UsedAt() time.Time {
return cn.usedAt.Load().(time.Time)
}
func (cn *Conn) SetUsedAt(tm time.Time) {
cn.usedAt.Store(tm)
}
func (cn *Conn) SetNetConn(netConn net.Conn) {
cn.netConn = netConn
cn.rd.Reset(netConn)
cn.wr.Reset(netConn)
}
func (cn *Conn) setReadTimeout(timeout time.Duration) error {
now := time.Now()
cn.SetUsedAt(now)
if timeout > 0 {
return cn.netConn.SetReadDeadline(now.Add(timeout))
}
return cn.netConn.SetReadDeadline(noDeadline)
}
func (cn *Conn) setWriteTimeout(timeout time.Duration) error {
now := time.Now()
cn.SetUsedAt(now)
if timeout > 0 {
return cn.netConn.SetWriteDeadline(now.Add(timeout))
}
return cn.netConn.SetWriteDeadline(noDeadline)
}
func (cn *Conn) Write(b []byte) (int, error) {
return cn.netConn.Write(b)
}
func (cn *Conn) RemoteAddr() net.Addr {
return cn.netConn.RemoteAddr()
}
func (cn *Conn) WithReader(timeout time.Duration, fn func(rd *proto.Reader) error) error {
_ = cn.setReadTimeout(timeout)
return fn(cn.rd)
}
func (cn *Conn) WithWriter(timeout time.Duration, fn func(wr *proto.Writer) error) error {
_ = cn.setWriteTimeout(timeout)
firstErr := fn(cn.wr)
err := cn.wr.Flush()
if err != nil && firstErr == nil {
firstErr = err
}
return firstErr
}
func (cn *Conn) Close() error {
return cn.netConn.Close()
}

View File

@ -1,53 +0,0 @@
package pool
type SingleConnPool struct {
cn *Conn
}
var _ Pooler = (*SingleConnPool)(nil)
func NewSingleConnPool(cn *Conn) *SingleConnPool {
return &SingleConnPool{
cn: cn,
}
}
func (p *SingleConnPool) NewConn() (*Conn, error) {
panic("not implemented")
}
func (p *SingleConnPool) CloseConn(*Conn) error {
panic("not implemented")
}
func (p *SingleConnPool) Get() (*Conn, error) {
return p.cn, nil
}
func (p *SingleConnPool) Put(cn *Conn) {
if p.cn != cn {
panic("p.cn != cn")
}
}
func (p *SingleConnPool) Remove(cn *Conn) {
if p.cn != cn {
panic("p.cn != cn")
}
}
func (p *SingleConnPool) Len() int {
return 1
}
func (p *SingleConnPool) IdleLen() int {
return 0
}
func (p *SingleConnPool) Stats() *Stats {
return nil
}
func (p *SingleConnPool) Close() error {
return nil
}

View File

@ -1,29 +0,0 @@
package internal
import "github.com/go-redis/redis/internal/util"
func ToLower(s string) string {
if isLower(s) {
return s
}
b := make([]byte, len(s))
for i := range b {
c := s[i]
if c >= 'A' && c <= 'Z' {
c += 'a' - 'A'
}
b[i] = c
}
return util.BytesToString(b)
}
func isLower(s string) bool {
for i := 0; i < len(s); i++ {
c := s[i]
if c >= 'A' && c <= 'Z' {
return false
}
}
return true
}

View File

@ -1,580 +0,0 @@
package redis
import (
"context"
"fmt"
"log"
"os"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/internal/proto"
)
// Nil reply Redis returns when key does not exist.
const Nil = proto.Nil
func init() {
SetLogger(log.New(os.Stderr, "redis: ", log.LstdFlags|log.Lshortfile))
}
func SetLogger(logger *log.Logger) {
internal.Logger = logger
}
type baseClient struct {
opt *Options
connPool pool.Pooler
limiter Limiter
process func(Cmder) error
processPipeline func([]Cmder) error
processTxPipeline func([]Cmder) error
onClose func() error // hook called when client is closed
}
func (c *baseClient) init() {
c.process = c.defaultProcess
c.processPipeline = c.defaultProcessPipeline
c.processTxPipeline = c.defaultProcessTxPipeline
}
func (c *baseClient) String() string {
return fmt.Sprintf("Redis<%s db:%d>", c.getAddr(), c.opt.DB)
}
func (c *baseClient) newConn() (*pool.Conn, error) {
cn, err := c.connPool.NewConn()
if err != nil {
return nil, err
}
if cn.InitedAt.IsZero() {
if err := c.initConn(cn); err != nil {
_ = c.connPool.CloseConn(cn)
return nil, err
}
}
return cn, nil
}
func (c *baseClient) getConn() (*pool.Conn, error) {
if c.limiter != nil {
err := c.limiter.Allow()
if err != nil {
return nil, err
}
}
cn, err := c._getConn()
if err != nil {
if c.limiter != nil {
c.limiter.ReportResult(err)
}
return nil, err
}
return cn, nil
}
func (c *baseClient) _getConn() (*pool.Conn, error) {
cn, err := c.connPool.Get()
if err != nil {
return nil, err
}
if cn.InitedAt.IsZero() {
err := c.initConn(cn)
if err != nil {
c.connPool.Remove(cn)
return nil, err
}
}
return cn, nil
}
func (c *baseClient) releaseConn(cn *pool.Conn, err error) {
if c.limiter != nil {
c.limiter.ReportResult(err)
}
if internal.IsBadConn(err, false) {
c.connPool.Remove(cn)
} else {
c.connPool.Put(cn)
}
}
func (c *baseClient) releaseConnStrict(cn *pool.Conn, err error) {
if c.limiter != nil {
c.limiter.ReportResult(err)
}
if err == nil || internal.IsRedisError(err) {
c.connPool.Put(cn)
} else {
c.connPool.Remove(cn)
}
}
func (c *baseClient) initConn(cn *pool.Conn) error {
cn.InitedAt = time.Now()
if c.opt.Password == "" &&
c.opt.DB == 0 &&
!c.opt.readOnly &&
c.opt.OnConnect == nil {
return nil
}
conn := newConn(c.opt, cn)
_, err := conn.Pipelined(func(pipe Pipeliner) error {
if c.opt.Password != "" {
pipe.Auth(c.opt.Password)
}
if c.opt.DB > 0 {
pipe.Select(c.opt.DB)
}
if c.opt.readOnly {
pipe.ReadOnly()
}
return nil
})
if err != nil {
return err
}
if c.opt.OnConnect != nil {
return c.opt.OnConnect(conn)
}
return nil
}
// Do creates a Cmd from the args and processes the cmd.
func (c *baseClient) Do(args ...interface{}) *Cmd {
cmd := NewCmd(args...)
_ = c.Process(cmd)
return cmd
}
// WrapProcess wraps function that processes Redis commands.
func (c *baseClient) WrapProcess(
fn func(oldProcess func(cmd Cmder) error) func(cmd Cmder) error,
) {
c.process = fn(c.process)
}
func (c *baseClient) Process(cmd Cmder) error {
return c.process(cmd)
}
func (c *baseClient) defaultProcess(cmd Cmder) error {
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
}
cn, err := c.getConn()
if err != nil {
cmd.setErr(err)
if internal.IsRetryableError(err, true) {
continue
}
return err
}
err = cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmd)
})
if err != nil {
c.releaseConn(cn, err)
cmd.setErr(err)
if internal.IsRetryableError(err, true) {
continue
}
return err
}
err = cn.WithReader(c.cmdTimeout(cmd), func(rd *proto.Reader) error {
return cmd.readReply(rd)
})
c.releaseConn(cn, err)
if err != nil && internal.IsRetryableError(err, cmd.readTimeout() == nil) {
continue
}
return err
}
return cmd.Err()
}
func (c *baseClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}
func (c *baseClient) cmdTimeout(cmd Cmder) time.Duration {
if timeout := cmd.readTimeout(); timeout != nil {
t := *timeout
if t == 0 {
return 0
}
return t + 10*time.Second
}
return c.opt.ReadTimeout
}
// Close closes the client, releasing any open resources.
//
// It is rare to Close a Client, as the Client is meant to be
// long-lived and shared between many goroutines.
func (c *baseClient) Close() error {
var firstErr error
if c.onClose != nil {
if err := c.onClose(); err != nil && firstErr == nil {
firstErr = err
}
}
if err := c.connPool.Close(); err != nil && firstErr == nil {
firstErr = err
}
return firstErr
}
func (c *baseClient) getAddr() string {
return c.opt.Addr
}
func (c *baseClient) WrapProcessPipeline(
fn func(oldProcess func([]Cmder) error) func([]Cmder) error,
) {
c.processPipeline = fn(c.processPipeline)
c.processTxPipeline = fn(c.processTxPipeline)
}
func (c *baseClient) defaultProcessPipeline(cmds []Cmder) error {
return c.generalProcessPipeline(cmds, c.pipelineProcessCmds)
}
func (c *baseClient) defaultProcessTxPipeline(cmds []Cmder) error {
return c.generalProcessPipeline(cmds, c.txPipelineProcessCmds)
}
type pipelineProcessor func(*pool.Conn, []Cmder) (bool, error)
func (c *baseClient) generalProcessPipeline(cmds []Cmder, p pipelineProcessor) error {
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
}
cn, err := c.getConn()
if err != nil {
setCmdsErr(cmds, err)
return err
}
canRetry, err := p(cn, cmds)
c.releaseConnStrict(cn, err)
if !canRetry || !internal.IsRetryableError(err, true) {
break
}
}
return cmdsFirstErr(cmds)
}
func (c *baseClient) pipelineProcessCmds(cn *pool.Conn, cmds []Cmder) (bool, error) {
err := cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmds...)
})
if err != nil {
setCmdsErr(cmds, err)
return true, err
}
err = cn.WithReader(c.opt.ReadTimeout, func(rd *proto.Reader) error {
return pipelineReadCmds(rd, cmds)
})
return true, err
}
func pipelineReadCmds(rd *proto.Reader, cmds []Cmder) error {
for _, cmd := range cmds {
err := cmd.readReply(rd)
if err != nil && !internal.IsRedisError(err) {
return err
}
}
return nil
}
func (c *baseClient) txPipelineProcessCmds(cn *pool.Conn, cmds []Cmder) (bool, error) {
err := cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
return txPipelineWriteMulti(wr, cmds)
})
if err != nil {
setCmdsErr(cmds, err)
return true, err
}
err = cn.WithReader(c.opt.ReadTimeout, func(rd *proto.Reader) error {
err := txPipelineReadQueued(rd, cmds)
if err != nil {
setCmdsErr(cmds, err)
return err
}
return pipelineReadCmds(rd, cmds)
})
return false, err
}
func txPipelineWriteMulti(wr *proto.Writer, cmds []Cmder) error {
multiExec := make([]Cmder, 0, len(cmds)+2)
multiExec = append(multiExec, NewStatusCmd("MULTI"))
multiExec = append(multiExec, cmds...)
multiExec = append(multiExec, NewSliceCmd("EXEC"))
return writeCmd(wr, multiExec...)
}
func txPipelineReadQueued(rd *proto.Reader, cmds []Cmder) error {
// Parse queued replies.
var statusCmd StatusCmd
err := statusCmd.readReply(rd)
if err != nil {
return err
}
for range cmds {
err = statusCmd.readReply(rd)
if err != nil && !internal.IsRedisError(err) {
return err
}
}
// Parse number of replies.
line, err := rd.ReadLine()
if err != nil {
if err == Nil {
err = TxFailedErr
}
return err
}
switch line[0] {
case proto.ErrorReply:
return proto.ParseErrorReply(line)
case proto.ArrayReply:
// ok
default:
err := fmt.Errorf("redis: expected '*', but got line %q", line)
return err
}
return nil
}
//------------------------------------------------------------------------------
// Client is a Redis client representing a pool of zero or more
// underlying connections. It's safe for concurrent use by multiple
// goroutines.
type Client struct {
baseClient
cmdable
ctx context.Context
}
// NewClient returns a client to the Redis Server specified by Options.
func NewClient(opt *Options) *Client {
opt.init()
c := Client{
baseClient: baseClient{
opt: opt,
connPool: newConnPool(opt),
},
}
c.baseClient.init()
c.init()
return &c
}
func (c *Client) init() {
c.cmdable.setProcessor(c.Process)
}
func (c *Client) Context() context.Context {
if c.ctx != nil {
return c.ctx
}
return context.Background()
}
func (c *Client) WithContext(ctx context.Context) *Client {
if ctx == nil {
panic("nil context")
}
c2 := c.clone()
c2.ctx = ctx
return c2
}
func (c *Client) clone() *Client {
cp := *c
cp.init()
return &cp
}
// Options returns read-only Options that were used to create the client.
func (c *Client) Options() *Options {
return c.opt
}
func (c *Client) SetLimiter(l Limiter) *Client {
c.limiter = l
return c
}
type PoolStats pool.Stats
// PoolStats returns connection pool stats.
func (c *Client) PoolStats() *PoolStats {
stats := c.connPool.Stats()
return (*PoolStats)(stats)
}
func (c *Client) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *Client) Pipeline() Pipeliner {
pipe := Pipeline{
exec: c.processPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
return &pipe
}
func (c *Client) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
// TxPipeline acts like Pipeline, but wraps queued commands with MULTI/EXEC.
func (c *Client) TxPipeline() Pipeliner {
pipe := Pipeline{
exec: c.processTxPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
return &pipe
}
func (c *Client) pubSub() *PubSub {
pubsub := &PubSub{
opt: c.opt,
newConn: func(channels []string) (*pool.Conn, error) {
return c.newConn()
},
closeConn: c.connPool.CloseConn,
}
pubsub.init()
return pubsub
}
// Subscribe subscribes the client to the specified channels.
// Channels can be omitted to create empty subscription.
// Note that this method does not wait on a response from Redis, so the
// subscription may not be active immediately. To force the connection to wait,
// you may call the Receive() method on the returned *PubSub like so:
//
// sub := client.Subscribe(queryResp)
// iface, err := sub.Receive()
// if err != nil {
// // handle error
// }
//
// // Should be *Subscription, but others are possible if other actions have been
// // taken on sub since it was created.
// switch iface.(type) {
// case *Subscription:
// // subscribe succeeded
// case *Message:
// // received first message
// case *Pong:
// // pong received
// default:
// // handle error
// }
//
// ch := sub.Channel()
func (c *Client) Subscribe(channels ...string) *PubSub {
pubsub := c.pubSub()
if len(channels) > 0 {
_ = pubsub.Subscribe(channels...)
}
return pubsub
}
// PSubscribe subscribes the client to the given patterns.
// Patterns can be omitted to create empty subscription.
func (c *Client) PSubscribe(channels ...string) *PubSub {
pubsub := c.pubSub()
if len(channels) > 0 {
_ = pubsub.PSubscribe(channels...)
}
return pubsub
}
//------------------------------------------------------------------------------
// Conn is like Client, but its pool contains single connection.
type Conn struct {
baseClient
statefulCmdable
}
func newConn(opt *Options, cn *pool.Conn) *Conn {
c := Conn{
baseClient: baseClient{
opt: opt,
connPool: pool.NewSingleConnPool(cn),
},
}
c.baseClient.init()
c.statefulCmdable.setProcessor(c.Process)
return &c
}
func (c *Conn) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *Conn) Pipeline() Pipeliner {
pipe := Pipeline{
exec: c.processPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
return &pipe
}
func (c *Conn) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
// TxPipeline acts like Pipeline, but wraps queued commands with MULTI/EXEC.
func (c *Conn) TxPipeline() Pipeliner {
pipe := Pipeline{
exec: c.processTxPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
return &pipe
}

vendor/github.com/go-redis/redis/v7/.golangci.yml (new vendored file)
View File

@ -0,0 +1,15 @@
run:
concurrency: 8
deadline: 5m
tests: false
linters:
enable-all: true
disable:
- funlen
- gochecknoglobals
- gocognit
- goconst
- godox
- gosec
- maligned
- wsl

vendor/github.com/go-redis/redis/v7/.travis.yml (new vendored file)
View File

@ -0,0 +1,22 @@
dist: xenial
language: go
services:
- redis-server
go:
- 1.12.x
- 1.13.x
- tip
matrix:
allow_failures:
- go: tip
env:
- GO111MODULE=on
go_import_path: github.com/go-redis/redis
before_install:
- curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.21.0

vendor/github.com/go-redis/redis/v7/CHANGELOG.md (new vendored file)
View File

@ -0,0 +1,46 @@
# Changelog
## v7.2
- Existing `HMSet` is renamed to `HSet` and old deprecated `HMSet` is restored for Redis 3 users.
## v7.1
- Existing `Cmd.String` is renamed to `Cmd.Text`. New `Cmd.String` implements `fmt.Stringer` interface.
## v7
- *Important*. Tx.Pipeline now returns a non-transactional pipeline. Use Tx.TxPipeline for a transactional pipeline.
- WrapProcess is replaced with more convenient AddHook that has access to context.Context.
- WithContext now can not be used to create a shallow copy of the client.
- New methods ProcessContext, DoContext, and ExecContext.
- Client respects Context.Deadline when setting net.Conn deadline.
- Client listens on Context.Done while waiting for a connection from the pool and returns an error when the context is cancelled.
- Add PubSub.ChannelWithSubscriptions that sends `*Subscription` in addition to `*Message` to allow detecting reconnections.
- `time.Time` is now marshalled in RFC3339 format. `rdb.Get("foo").Time()` helper is added to parse the time.
- `SetLimiter` is removed and added `Options.Limiter` instead.
- `HMSet` is deprecated as of Redis v4.
## v6.15
- Cluster and Ring pipelines process commands for each node in its own goroutine.
## 6.14
- Added Options.MinIdleConns.
- Added Options.MaxConnAge.
- PoolStats.FreeConns is renamed to PoolStats.IdleConns.
- Add Client.Do to simplify creating custom commands.
- Add Cmd.String, Cmd.Int, Cmd.Int64, Cmd.Uint64, Cmd.Float64, and Cmd.Bool helpers.
- Lower memory usage.
## v6.13
- Ring got new options called `HashReplicas` and `Hash`. It is recommended to set `HashReplicas = 1000` for better keys distribution between shards.
- Cluster client was optimized to use much less memory when reloading cluster state.
- PubSub.ReceiveMessage is re-worked to not use ReceiveTimeout so it does not lose data when timeout occurs. In most cases it is recommended to use PubSub.Channel instead.
- Dialer.KeepAlive is set to 5 minutes by default.
## v6.12
- ClusterClient got new option called `ClusterSlots` which allows to build cluster of normal Redis Servers that don't have cluster mode enabled. See https://godoc.org/github.com/go-redis/redis#example-NewClusterClient--ManualSetup

View File

@ -1,10 +1,9 @@
all: testdeps
go test ./...
go test ./... -short -race
go test ./... -run=NONE -bench=. -benchmem
env GOOS=linux GOARCH=386 go test ./...
go vet
go get github.com/gordonklaus/ineffassign
ineffassign .
golangci-lint run
testdeps: testdata/redis/src/redis-server
@ -15,8 +14,7 @@ bench: testdeps
testdata/redis:
mkdir -p $@
wget -qO- https://github.com/antirez/redis/archive/5.0.tar.gz | tar xvz --strip-components=1 -C $@
wget -qO- http://download.redis.io/redis-stable.tar.gz | tar xvz --strip-components=1 -C $@
testdata/redis/src/redis-server: testdata/redis
sed -i.bak 's/libjemalloc.a/libjemalloc.a -lrt/g' $</src/Makefile
cd $< && make all

vendor/github.com/go-redis/redis/v7/README.md generated vendored

@ -9,7 +9,7 @@ Supports:
- Redis 3 commands except QUIT, MONITOR, SLOWLOG and SYNC.
- Automatic connection pooling with [circuit breaker](https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern) support.
- [Pub/Sub](https://godoc.org/github.com/go-redis/redis#PubSub).
- [Transactions](https://godoc.org/github.com/go-redis/redis#Multi).
- [Transactions](https://godoc.org/github.com/go-redis/redis#example-Client-TxPipeline).
- [Pipeline](https://godoc.org/github.com/go-redis/redis#example-Client-Pipeline) and [TxPipeline](https://godoc.org/github.com/go-redis/redis#example-Client-TxPipeline).
- [Scripting](https://godoc.org/github.com/go-redis/redis#Script).
- [Timeouts](https://godoc.org/github.com/go-redis/redis#Options).
@ -20,28 +20,29 @@ Supports:
- [Instrumentation](https://godoc.org/github.com/go-redis/redis#ex-package--Instrumentation).
- [Cache friendly](https://github.com/go-redis/cache).
- [Rate limiting](https://github.com/go-redis/redis_rate).
- [Distributed Locks](https://github.com/bsm/redis-lock).
- [Distributed Locks](https://github.com/bsm/redislock).
API docs: https://godoc.org/github.com/go-redis/redis.
Examples: https://godoc.org/github.com/go-redis/redis#pkg-examples.
## Installation
Install:
go-redis requires a Go version with [Modules](https://github.com/golang/go/wiki/Modules) support and uses import versioning, so please make sure to initialize a Go module before installing go-redis:
```shell
go get -u github.com/go-redis/redis
``` shell
go mod init github.com/my/repo
go get github.com/go-redis/redis/v7
```
Import:
```go
import "github.com/go-redis/redis"
``` go
import "github.com/go-redis/redis/v7"
```
## Quickstart
```go
``` go
func ExampleNewClient() {
client := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
@ -55,6 +56,11 @@ func ExampleNewClient() {
}
func ExampleClient() {
client := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Password: "", // no password set
DB: 0, // use default DB
})
err := client.Set("key", "value", 0).Err()
if err != nil {
panic(err)
@ -87,15 +93,15 @@ Please go through [examples](https://godoc.org/github.com/go-redis/redis#pkg-exa
Some corner cases:
```go
``` go
// SET key value EX 10 NX
set, err := client.SetNX("key", "value", 10*time.Second).Result()
// SORT list LIMIT 0 2 ASC
vals, err := client.Sort("list", redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()
vals, err := client.Sort("list", &redis.Sort{Offset: 0, Count: 2, Order: "ASC"}).Result()
// ZRANGEBYSCORE zset -inf +inf WITHSCORES LIMIT 0 2
vals, err := client.ZRangeByScoreWithScores("zset", redis.ZRangeBy{
vals, err := client.ZRangeByScoreWithScores("zset", &redis.ZRangeBy{
Min: "-inf",
Max: "+inf",
Offset: 0,
@ -103,44 +109,20 @@ vals, err := client.ZRangeByScoreWithScores("zset", redis.ZRangeBy{
}).Result()
// ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 AGGREGATE SUM
vals, err := client.ZInterStore("out", redis.ZStore{Weights: []int64{2, 3}}, "zset1", "zset2").Result()
vals, err := client.ZInterStore("out", &redis.ZStore{
Keys: []string{"zset1", "zset2"},
Weights: []int64{2, 3},
}).Result()
// EVAL "return {KEYS[1],ARGV[1]}" 1 "key" "hello"
vals, err := client.Eval("return {KEYS[1],ARGV[1]}", []string{"key"}, "hello").Result()
```
## Benchmark
go-redis vs redigo:
```
BenchmarkSetGoRedis10Conns64Bytes-4 200000 7621 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis100Conns64Bytes-4 200000 7554 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis10Conns1KB-4 200000 7697 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis100Conns1KB-4 200000 7688 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis10Conns10KB-4 200000 9214 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis100Conns10KB-4 200000 9181 ns/op 210 B/op 6 allocs/op
BenchmarkSetGoRedis10Conns1MB-4 2000 583242 ns/op 2337 B/op 6 allocs/op
BenchmarkSetGoRedis100Conns1MB-4 2000 583089 ns/op 2338 B/op 6 allocs/op
BenchmarkSetRedigo10Conns64Bytes-4 200000 7576 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo100Conns64Bytes-4 200000 7782 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo10Conns1KB-4 200000 7958 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo100Conns1KB-4 200000 7725 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo10Conns10KB-4 100000 18442 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo100Conns10KB-4 100000 18818 ns/op 208 B/op 7 allocs/op
BenchmarkSetRedigo10Conns1MB-4 2000 668829 ns/op 226 B/op 7 allocs/op
BenchmarkSetRedigo100Conns1MB-4 2000 679542 ns/op 226 B/op 7 allocs/op
```
Redis Cluster:
```
BenchmarkRedisPing-4 200000 6983 ns/op 116 B/op 4 allocs/op
BenchmarkRedisClusterPing-4 100000 11535 ns/op 117 B/op 4 allocs/op
// custom command
res, err := client.Do("set", "key", "value").Result()
```
## See also
- [Golang PostgreSQL ORM](https://github.com/go-pg/pg)
- [Golang msgpack](https://github.com/vmihailenco/msgpack)
- [Golang message task queue](https://github.com/go-msgqueue/msgqueue)
- [Golang message task queue](https://github.com/vmihailenco/taskq)

vendor/github.com/go-redis/redis/v7/cluster.go generated vendored

@ -13,10 +13,10 @@ import (
"sync/atomic"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/hashtag"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/internal/proto"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/hashtag"
"github.com/go-redis/redis/v7/internal/pool"
"github.com/go-redis/redis/v7/internal/proto"
)
var errClusterNoNodes = fmt.Errorf("redis: cluster has no nodes")
@ -53,8 +53,11 @@ type ClusterOptions struct {
// Following options are copied from Options struct.
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
OnConnect func(*Conn) error
Username string
Password string
MaxRetries int
@ -65,6 +68,9 @@ type ClusterOptions struct {
ReadTimeout time.Duration
WriteTimeout time.Duration
// NewClient creates a cluster node client with provided name and options.
NewClient func(opt *Options) *Client
// PoolSize applies per cluster node and not for the whole cluster.
PoolSize int
MinIdleConns int
@ -116,17 +122,23 @@ func (opt *ClusterOptions) init() {
case 0:
opt.MaxRetryBackoff = 512 * time.Millisecond
}
if opt.NewClient == nil {
opt.NewClient = NewClient
}
}
func (opt *ClusterOptions) clientOptions() *Options {
const disableIdleCheck = -1
return &Options{
Dialer: opt.Dialer,
OnConnect: opt.OnConnect,
MaxRetries: opt.MaxRetries,
MinRetryBackoff: opt.MinRetryBackoff,
MaxRetryBackoff: opt.MaxRetryBackoff,
Username: opt.Username,
Password: opt.Password,
readOnly: opt.ReadOnly,
@ -152,14 +164,14 @@ type clusterNode struct {
latency uint32 // atomic
generation uint32 // atomic
loading uint32 // atomic
failing uint32 // atomic
}
func newClusterNode(clOpt *ClusterOptions, addr string) *clusterNode {
opt := clOpt.clientOptions()
opt.Addr = addr
node := clusterNode{
Client: NewClient(opt),
Client: clOpt.NewClient(opt),
}
node.latency = math.MaxUint32
@ -200,21 +212,21 @@ func (n *clusterNode) Latency() time.Duration {
return time.Duration(latency) * time.Microsecond
}
func (n *clusterNode) MarkAsLoading() {
atomic.StoreUint32(&n.loading, uint32(time.Now().Unix()))
func (n *clusterNode) MarkAsFailing() {
atomic.StoreUint32(&n.failing, uint32(time.Now().Unix()))
}
func (n *clusterNode) Loading() bool {
const minute = int64(time.Minute / time.Second)
func (n *clusterNode) Failing() bool {
const timeout = 15 // 15 seconds
loading := atomic.LoadUint32(&n.loading)
if loading == 0 {
failing := atomic.LoadUint32(&n.failing)
if failing == 0 {
return false
}
if time.Now().Unix()-int64(loading) < minute {
if time.Now().Unix()-int64(failing) < timeout {
return true
}
atomic.StoreUint32(&n.loading, 0)
atomic.StoreUint32(&n.failing, 0)
return false
}
@ -304,6 +316,7 @@ func (c *clusterNodes) NextGeneration() uint32 {
// GC removes unused nodes.
func (c *clusterNodes) GC(generation uint32) {
//nolint:prealloc
var collected []*clusterNode
c.mu.Lock()
for addr, node := range c.allNodes {
@ -323,20 +336,7 @@ func (c *clusterNodes) GC(generation uint32) {
}
func (c *clusterNodes) Get(addr string) (*clusterNode, error) {
var node *clusterNode
var err error
c.mu.RLock()
if c.closed {
err = pool.ErrClosed
} else {
node = c.allNodes[addr]
}
c.mu.RUnlock()
return node, err
}
func (c *clusterNodes) GetOrCreate(addr string) (*clusterNode, error) {
node, err := c.Get(addr)
node, err := c.get(addr)
if err != nil {
return nil, err
}
@ -365,6 +365,19 @@ func (c *clusterNodes) GetOrCreate(addr string) (*clusterNode, error) {
return node, err
}
func (c *clusterNodes) get(addr string) (*clusterNode, error) {
var node *clusterNode
var err error
c.mu.RLock()
if c.closed {
err = pool.ErrClosed
} else {
node = c.allNodes[addr]
}
c.mu.RUnlock()
return node, err
}
func (c *clusterNodes) All() ([]*clusterNode, error) {
c.mu.RLock()
defer c.mu.RUnlock()
@ -387,7 +400,7 @@ func (c *clusterNodes) Random() (*clusterNode, error) {
}
n := rand.Intn(len(addrs))
return c.GetOrCreate(addrs[n])
return c.Get(addrs[n])
}
//------------------------------------------------------------------------------
@ -445,7 +458,7 @@ func newClusterState(
addr = replaceLoopbackHost(addr, originHost)
}
node, err := c.nodes.GetOrCreate(addr)
node, err := c.nodes.Get(addr)
if err != nil {
return nil, err
}
@ -519,7 +532,7 @@ func (c *clusterState) slotSlaveNode(slot int) (*clusterNode, error) {
case 1:
return nodes[0], nil
case 2:
if slave := nodes[1]; !slave.Loading() {
if slave := nodes[1]; !slave.Failing() {
return slave, nil
}
return nodes[0], nil
@ -528,7 +541,7 @@ func (c *clusterState) slotSlaveNode(slot int) (*clusterNode, error) {
for i := 0; i < 10; i++ {
n := rand.Intn(len(nodes)-1) + 1
slave = nodes[n]
if !slave.Loading() {
if !slave.Failing() {
return slave, nil
}
}
@ -548,7 +561,7 @@ func (c *clusterState) slotClosestNode(slot int) (*clusterNode, error) {
var node *clusterNode
for _, n := range nodes {
if n.Loading() {
if n.Failing() {
continue
}
if node == nil || node.Latency()-n.Latency() > threshold {
@ -558,10 +571,13 @@ func (c *clusterState) slotClosestNode(slot int) (*clusterNode, error) {
return node, nil
}
func (c *clusterState) slotRandomNode(slot int) *clusterNode {
func (c *clusterState) slotRandomNode(slot int) (*clusterNode, error) {
nodes := c.slotNodes(slot)
if len(nodes) == 0 {
return c.nodes.Random()
}
n := rand.Intn(len(nodes))
return nodes[n]
return nodes[n], nil
}
func (c *clusterState) slotNodes(slot int) []*clusterNode {
@ -639,22 +655,21 @@ func (c *clusterStateHolder) ReloadOrGet() (*clusterState, error) {
//------------------------------------------------------------------------------
type clusterClient struct {
opt *ClusterOptions
nodes *clusterNodes
state *clusterStateHolder //nolint:structcheck
cmdsInfoCache *cmdsInfoCache //nolint:structcheck
}
// ClusterClient is a Redis Cluster client representing a pool of zero
// or more underlying connections. It's safe for concurrent use by
// multiple goroutines.
type ClusterClient struct {
*clusterClient
cmdable
hooks
ctx context.Context
opt *ClusterOptions
nodes *clusterNodes
state *clusterStateHolder
cmdsInfoCache *cmdsInfoCache
process func(Cmder) error
processPipeline func([]Cmder) error
processTxPipeline func([]Cmder) error
}
// NewClusterClient returns a Redis Cluster client as described in
@ -663,17 +678,16 @@ func NewClusterClient(opt *ClusterOptions) *ClusterClient {
opt.init()
c := &ClusterClient{
opt: opt,
nodes: newClusterNodes(opt),
clusterClient: &clusterClient{
opt: opt,
nodes: newClusterNodes(opt),
},
ctx: context.Background(),
}
c.state = newClusterStateHolder(c.loadState)
c.cmdsInfoCache = newCmdsInfoCache(c.cmdsInfo)
c.cmdable = c.Process
c.process = c.defaultProcess
c.processPipeline = c.defaultProcessPipeline
c.processTxPipeline = c.defaultProcessTxPipeline
c.init()
if opt.IdleCheckFrequency > 0 {
go c.reaper(opt.IdleCheckFrequency)
}
@ -681,37 +695,19 @@ func NewClusterClient(opt *ClusterOptions) *ClusterClient {
return c
}
func (c *ClusterClient) init() {
c.cmdable.setProcessor(c.Process)
}
// ReloadState reloads cluster state. If available it calls ClusterSlots func
// to get cluster slots information.
func (c *ClusterClient) ReloadState() error {
_, err := c.state.Reload()
return err
}
func (c *ClusterClient) Context() context.Context {
if c.ctx != nil {
return c.ctx
}
return context.Background()
return c.ctx
}
func (c *ClusterClient) WithContext(ctx context.Context) *ClusterClient {
if ctx == nil {
panic("nil context")
}
c2 := c.copy()
c2.ctx = ctx
return c2
}
func (c *ClusterClient) copy() *ClusterClient {
cp := *c
cp.init()
return &cp
clone := *c
clone.cmdable = clone.Process
clone.hooks.lock()
clone.ctx = ctx
return &clone
}
// Options returns read-only Options that were used to create the client.
@ -719,164 +715,10 @@ func (c *ClusterClient) Options() *ClusterOptions {
return c.opt
}
func (c *ClusterClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}
func (c *ClusterClient) cmdsInfo() (map[string]*CommandInfo, error) {
addrs, err := c.nodes.Addrs()
if err != nil {
return nil, err
}
var firstErr error
for _, addr := range addrs {
node, err := c.nodes.Get(addr)
if err != nil {
return nil, err
}
if node == nil {
continue
}
info, err := node.Client.Command().Result()
if err == nil {
return info, nil
}
if firstErr == nil {
firstErr = err
}
}
return nil, firstErr
}
func (c *ClusterClient) cmdInfo(name string) *CommandInfo {
cmdsInfo, err := c.cmdsInfoCache.Get()
if err != nil {
return nil
}
info := cmdsInfo[name]
if info == nil {
internal.Logf("info for cmd=%s not found", name)
}
return info
}
func cmdSlot(cmd Cmder, pos int) int {
if pos == 0 {
return hashtag.RandomSlot()
}
firstKey := cmd.stringArg(pos)
return hashtag.Slot(firstKey)
}
func (c *ClusterClient) cmdSlot(cmd Cmder) int {
args := cmd.Args()
if args[0] == "cluster" && args[1] == "getkeysinslot" {
return args[2].(int)
}
cmdInfo := c.cmdInfo(cmd.Name())
return cmdSlot(cmd, cmdFirstKeyPos(cmd, cmdInfo))
}
func (c *ClusterClient) cmdSlotAndNode(cmd Cmder) (int, *clusterNode, error) {
state, err := c.state.Get()
if err != nil {
return 0, nil, err
}
cmdInfo := c.cmdInfo(cmd.Name())
slot := c.cmdSlot(cmd)
if c.opt.ReadOnly && cmdInfo != nil && cmdInfo.ReadOnly {
if c.opt.RouteByLatency {
node, err := state.slotClosestNode(slot)
return slot, node, err
}
if c.opt.RouteRandomly {
node := state.slotRandomNode(slot)
return slot, node, nil
}
node, err := state.slotSlaveNode(slot)
return slot, node, err
}
node, err := state.slotMasterNode(slot)
return slot, node, err
}
func (c *ClusterClient) slotMasterNode(slot int) (*clusterNode, error) {
state, err := c.state.Get()
if err != nil {
return nil, err
}
nodes := state.slotNodes(slot)
if len(nodes) > 0 {
return nodes[0], nil
}
return c.nodes.Random()
}
func (c *ClusterClient) Watch(fn func(*Tx) error, keys ...string) error {
if len(keys) == 0 {
return fmt.Errorf("redis: Watch requires at least one key")
}
slot := hashtag.Slot(keys[0])
for _, key := range keys[1:] {
if hashtag.Slot(key) != slot {
err := fmt.Errorf("redis: Watch requires all keys to be in the same slot")
return err
}
}
node, err := c.slotMasterNode(slot)
if err != nil {
return err
}
for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
}
err = node.Client.Watch(fn, keys...)
if err == nil {
break
}
if err != Nil {
c.state.LazyReload()
}
moved, ask, addr := internal.IsMovedError(err)
if moved || ask {
node, err = c.nodes.GetOrCreate(addr)
if err != nil {
return err
}
continue
}
if err == pool.ErrClosed || internal.IsReadOnlyError(err) {
node, err = c.slotMasterNode(slot)
if err != nil {
return err
}
continue
}
if internal.IsRetryableError(err, true) {
continue
}
return err
}
// ReloadState reloads cluster state. If available it calls ClusterSlots func
// to get cluster slots information.
func (c *ClusterClient) ReloadState() error {
_, err := c.state.Reload()
return err
}
@ -890,99 +732,111 @@ func (c *ClusterClient) Close() error {
// Do creates a Cmd from the args and processes the cmd.
func (c *ClusterClient) Do(args ...interface{}) *Cmd {
return c.DoContext(c.ctx, args...)
}
func (c *ClusterClient) DoContext(ctx context.Context, args ...interface{}) *Cmd {
cmd := NewCmd(args...)
c.Process(cmd)
_ = c.ProcessContext(ctx, cmd)
return cmd
}
func (c *ClusterClient) WrapProcess(
fn func(oldProcess func(Cmder) error) func(Cmder) error,
) {
c.process = fn(c.process)
}
func (c *ClusterClient) Process(cmd Cmder) error {
return c.process(cmd)
return c.ProcessContext(c.ctx, cmd)
}
func (c *ClusterClient) defaultProcess(cmd Cmder) error {
func (c *ClusterClient) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.hooks.process(ctx, cmd, c.process)
}
func (c *ClusterClient) process(ctx context.Context, cmd Cmder) error {
err := c._process(ctx, cmd)
if err != nil {
cmd.SetErr(err)
return err
}
return nil
}
func (c *ClusterClient) _process(ctx context.Context, cmd Cmder) error {
cmdInfo := c.cmdInfo(cmd.Name())
slot := c.cmdSlot(cmd)
var node *clusterNode
var ask bool
var lastErr error
for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
if node == nil {
var err error
_, node, err = c.cmdSlotAndNode(cmd)
node, err = c.cmdNode(cmdInfo, slot)
if err != nil {
cmd.setErr(err)
break
return err
}
}
var err error
if ask {
pipe := node.Client.Pipeline()
_ = pipe.Process(NewCmd("ASKING"))
_ = pipe.Process(NewCmd("asking"))
_ = pipe.Process(cmd)
_, err = pipe.Exec()
_, lastErr = pipe.ExecContext(ctx)
_ = pipe.Close()
ask = false
} else {
err = node.Client.Process(cmd)
lastErr = node.Client.ProcessContext(ctx, cmd)
}
// If there is no error - we are done.
if err == nil {
break
if lastErr == nil {
return nil
}
if err != Nil {
if lastErr != Nil {
c.state.LazyReload()
}
if lastErr == pool.ErrClosed || isReadOnlyError(lastErr) {
node = nil
continue
}
// If slave is loading - pick another node.
if c.opt.ReadOnly && internal.IsLoadingError(err) {
node.MarkAsLoading()
if c.opt.ReadOnly && isLoadingError(lastErr) {
node.MarkAsFailing()
node = nil
continue
}
var moved bool
var addr string
moved, ask, addr = internal.IsMovedError(err)
moved, ask, addr = isMovedError(lastErr)
if moved || ask {
node, err = c.nodes.GetOrCreate(addr)
var err error
node, err = c.nodes.Get(addr)
if err != nil {
break
return err
}
continue
}
if err == pool.ErrClosed || internal.IsReadOnlyError(err) {
node = nil
continue
}
if internal.IsRetryableError(err, true) {
if isRetryableError(lastErr, cmd.readTimeout() == nil) {
// First retry the same node.
if attempt == 0 {
continue
}
// Second try random node.
node, err = c.nodes.Random()
if err != nil {
break
}
// Second try another node.
node.MarkAsFailing()
node = nil
continue
}
break
return lastErr
}
return cmd.Err()
return lastErr
}
// ForEachMaster concurrently calls the fn on each master node in the cluster.
@ -995,6 +849,7 @@ func (c *ClusterClient) ForEachMaster(fn func(client *Client) error) error {
var wg sync.WaitGroup
errCh := make(chan error, 1)
for _, master := range state.Masters {
wg.Add(1)
go func(node *clusterNode) {
@ -1008,6 +863,7 @@ func (c *ClusterClient) ForEachMaster(fn func(client *Client) error) error {
}
}(master)
}
wg.Wait()
select {
@ -1028,6 +884,7 @@ func (c *ClusterClient) ForEachSlave(fn func(client *Client) error) error {
var wg sync.WaitGroup
errCh := make(chan error, 1)
for _, slave := range state.Slaves {
wg.Add(1)
go func(node *clusterNode) {
@ -1041,6 +898,7 @@ func (c *ClusterClient) ForEachSlave(fn func(client *Client) error) error {
}
}(slave)
}
wg.Wait()
select {
@ -1061,6 +919,7 @@ func (c *ClusterClient) ForEachNode(fn func(client *Client) error) error {
var wg sync.WaitGroup
errCh := make(chan error, 1)
worker := func(node *clusterNode) {
defer wg.Done()
err := fn(node.Client)
@ -1082,6 +941,7 @@ func (c *ClusterClient) ForEachNode(fn func(client *Client) error) error {
}
wg.Wait()
select {
case err := <-errCh:
return err
@ -1140,7 +1000,7 @@ func (c *ClusterClient) loadState() (*clusterState, error) {
var firstErr error
for _, addr := range addrs {
node, err := c.nodes.GetOrCreate(addr)
node, err := c.nodes.Get(addr)
if err != nil {
if firstErr == nil {
firstErr = err
@ -1176,7 +1036,7 @@ func (c *ClusterClient) reaper(idleCheckFrequency time.Duration) {
for _, node := range nodes {
_, err := node.Client.connPool.(*pool.ConnPool).ReapStaleConns()
if err != nil {
internal.Logf("ReapStaleConns failed: %s", err)
internal.Logger.Printf("ReapStaleConns failed: %s", err)
}
}
}
@ -1184,9 +1044,10 @@ func (c *ClusterClient) reaper(idleCheckFrequency time.Duration) {
func (c *ClusterClient) Pipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
pipe.init()
return &pipe
}
@ -1194,15 +1055,13 @@ func (c *ClusterClient) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *ClusterClient) WrapProcessPipeline(
fn func(oldProcess func([]Cmder) error) func([]Cmder) error,
) {
c.processPipeline = fn(c.processPipeline)
func (c *ClusterClient) processPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, c._processPipeline)
}
func (c *ClusterClient) defaultProcessPipeline(cmds []Cmder) error {
func (c *ClusterClient) _processPipeline(ctx context.Context, cmds []Cmder) error {
cmdsMap := newCmdsMap()
err := c.mapCmdsByNode(cmds, cmdsMap)
err := c.mapCmdsByNode(cmdsMap, cmds)
if err != nil {
setCmdsErr(cmds, err)
return err
@ -1210,7 +1069,10 @@ func (c *ClusterClient) defaultProcessPipeline(cmds []Cmder) error {
for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
setCmdsErr(cmds, err)
return err
}
}
failedCmds := newCmdsMap()
@ -1221,18 +1083,17 @@ func (c *ClusterClient) defaultProcessPipeline(cmds []Cmder) error {
go func(node *clusterNode, cmds []Cmder) {
defer wg.Done()
cn, err := node.Client.getConn()
if err != nil {
if err == pool.ErrClosed {
c.mapCmdsByNode(cmds, failedCmds)
} else {
setCmdsErr(cmds, err)
}
err := c._processPipelineNode(ctx, node, cmds, failedCmds)
if err == nil {
return
}
err = c.pipelineProcessCmds(node, cn, cmds, failedCmds)
node.Client.releaseConnStrict(cn, err)
if attempt < c.opt.MaxRedirects {
if err := c.mapCmdsByNode(failedCmds, cmds); err != nil {
setCmdsErr(cmds, err)
}
} else {
setCmdsErr(cmds, err)
}
}(node, cmds)
}
@ -1246,40 +1107,31 @@ func (c *ClusterClient) defaultProcessPipeline(cmds []Cmder) error {
return cmdsFirstErr(cmds)
}
type cmdsMap struct {
mu sync.Mutex
m map[*clusterNode][]Cmder
}
func newCmdsMap() *cmdsMap {
return &cmdsMap{
m: make(map[*clusterNode][]Cmder),
}
}
func (c *ClusterClient) mapCmdsByNode(cmds []Cmder, cmdsMap *cmdsMap) error {
func (c *ClusterClient) mapCmdsByNode(cmdsMap *cmdsMap, cmds []Cmder) error {
state, err := c.state.Get()
if err != nil {
setCmdsErr(cmds, err)
return err
}
cmdsAreReadOnly := c.cmdsAreReadOnly(cmds)
for _, cmd := range cmds {
var node *clusterNode
var err error
if cmdsAreReadOnly {
_, node, err = c.cmdSlotAndNode(cmd)
} else {
if c.opt.ReadOnly && c.cmdsAreReadOnly(cmds) {
for _, cmd := range cmds {
slot := c.cmdSlot(cmd)
node, err = state.slotMasterNode(slot)
node, err := c.slotReadOnlyNode(state, slot)
if err != nil {
return err
}
cmdsMap.Add(node, cmd)
}
return nil
}
for _, cmd := range cmds {
slot := c.cmdSlot(cmd)
node, err := state.slotMasterNode(slot)
if err != nil {
return err
}
cmdsMap.mu.Lock()
cmdsMap.m[node] = append(cmdsMap.m[node], cmd)
cmdsMap.mu.Unlock()
cmdsMap.Add(node, cmd)
}
return nil
}
@ -1294,94 +1146,83 @@ func (c *ClusterClient) cmdsAreReadOnly(cmds []Cmder) bool {
return true
}
func (c *ClusterClient) pipelineProcessCmds(
node *clusterNode, cn *pool.Conn, cmds []Cmder, failedCmds *cmdsMap,
func (c *ClusterClient) _processPipelineNode(
ctx context.Context, node *clusterNode, cmds []Cmder, failedCmds *cmdsMap,
) error {
err := cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmds...)
})
if err != nil {
setCmdsErr(cmds, err)
failedCmds.mu.Lock()
failedCmds.m[node] = cmds
failedCmds.mu.Unlock()
return err
}
return node.Client.hooks.processPipeline(ctx, cmds, func(ctx context.Context, cmds []Cmder) error {
return node.Client.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmds(wr, cmds)
})
if err != nil {
return err
}
err = cn.WithReader(c.opt.ReadTimeout, func(rd *proto.Reader) error {
return c.pipelineReadCmds(node, rd, cmds, failedCmds)
return cn.WithReader(ctx, c.opt.ReadTimeout, func(rd *proto.Reader) error {
return c.pipelineReadCmds(node, rd, cmds, failedCmds)
})
})
})
return err
}
func (c *ClusterClient) pipelineReadCmds(
node *clusterNode, rd *proto.Reader, cmds []Cmder, failedCmds *cmdsMap,
) error {
var firstErr error
for _, cmd := range cmds {
err := cmd.readReply(rd)
if err == nil {
continue
}
if c.checkMovedErr(cmd, err, failedCmds) {
continue
}
if internal.IsRedisError(err) {
if c.opt.ReadOnly && isLoadingError(err) {
node.MarkAsFailing()
return err
}
if isRedisError(err) {
continue
}
failedCmds.mu.Lock()
failedCmds.m[node] = append(failedCmds.m[node], cmd)
failedCmds.mu.Unlock()
if firstErr == nil {
firstErr = err
}
return err
}
return firstErr
return nil
}
func (c *ClusterClient) checkMovedErr(
cmd Cmder, err error, failedCmds *cmdsMap,
) bool {
moved, ask, addr := internal.IsMovedError(err)
moved, ask, addr := isMovedError(err)
if !moved && !ask {
return false
}
node, err := c.nodes.Get(addr)
if err != nil {
return false
}
if moved {
c.state.LazyReload()
node, err := c.nodes.GetOrCreate(addr)
if err != nil {
return false
}
failedCmds.mu.Lock()
failedCmds.m[node] = append(failedCmds.m[node], cmd)
failedCmds.mu.Unlock()
failedCmds.Add(node, cmd)
return true
}
if ask {
node, err := c.nodes.GetOrCreate(addr)
if err != nil {
return false
}
failedCmds.mu.Lock()
failedCmds.m[node] = append(failedCmds.m[node], NewCmd("ASKING"), cmd)
failedCmds.mu.Unlock()
failedCmds.Add(node, NewCmd("asking"), cmd)
return true
}
return false
panic("not reached")
}
// TxPipeline acts like Pipeline, but wraps queued commands with MULTI/EXEC.
func (c *ClusterClient) TxPipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processTxPipeline,
}
pipe.statefulCmdable.setProcessor(pipe.Process)
pipe.init()
return &pipe
}
@ -1389,9 +1230,14 @@ func (c *ClusterClient) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
func (c *ClusterClient) defaultProcessTxPipeline(cmds []Cmder) error {
func (c *ClusterClient) processTxPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, c._processTxPipeline)
}
func (c *ClusterClient) _processTxPipeline(ctx context.Context, cmds []Cmder) error {
state, err := c.state.Get()
if err != nil {
setCmdsErr(cmds, err)
return err
}
@ -1402,11 +1248,14 @@ func (c *ClusterClient) defaultProcessTxPipeline(cmds []Cmder) error {
setCmdsErr(cmds, err)
continue
}
cmdsMap := map[*clusterNode][]Cmder{node: cmds}
cmdsMap := map[*clusterNode][]Cmder{node: cmds}
for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
setCmdsErr(cmds, err)
return err
}
}
failedCmds := newCmdsMap()
@ -1417,18 +1266,17 @@ func (c *ClusterClient) defaultProcessTxPipeline(cmds []Cmder) error {
go func(node *clusterNode, cmds []Cmder) {
defer wg.Done()
cn, err := node.Client.getConn()
if err != nil {
if err == pool.ErrClosed {
c.mapCmdsByNode(cmds, failedCmds)
} else {
setCmdsErr(cmds, err)
}
err := c._processTxPipelineNode(ctx, node, cmds, failedCmds)
if err == nil {
return
}
err = c.txPipelineProcessCmds(node, cn, cmds, failedCmds)
node.Client.releaseConnStrict(cn, err)
if attempt < c.opt.MaxRedirects {
if err := c.mapCmdsByNode(failedCmds, cmds); err != nil {
setCmdsErr(cmds, err)
}
} else {
setCmdsErr(cmds, err)
}
}(node, cmds)
}
@ -1452,50 +1300,51 @@ func (c *ClusterClient) mapCmdsBySlot(cmds []Cmder) map[int][]Cmder {
return cmdsMap
}
func (c *ClusterClient) txPipelineProcessCmds(
node *clusterNode, cn *pool.Conn, cmds []Cmder, failedCmds *cmdsMap,
func (c *ClusterClient) _processTxPipelineNode(
ctx context.Context, node *clusterNode, cmds []Cmder, failedCmds *cmdsMap,
) error {
err := cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
return txPipelineWriteMulti(wr, cmds)
})
if err != nil {
setCmdsErr(cmds, err)
failedCmds.mu.Lock()
failedCmds.m[node] = cmds
failedCmds.mu.Unlock()
return err
}
return node.Client.hooks.processTxPipeline(ctx, cmds, func(ctx context.Context, cmds []Cmder) error {
return node.Client.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmds(wr, cmds)
})
if err != nil {
return err
}
err = cn.WithReader(c.opt.ReadTimeout, func(rd *proto.Reader) error {
err := c.txPipelineReadQueued(rd, cmds, failedCmds)
if err != nil {
setCmdsErr(cmds, err)
return err
}
return pipelineReadCmds(rd, cmds)
return cn.WithReader(ctx, c.opt.ReadTimeout, func(rd *proto.Reader) error {
statusCmd := cmds[0].(*StatusCmd)
// Trim multi and exec.
cmds = cmds[1 : len(cmds)-1]
err := c.txPipelineReadQueued(rd, statusCmd, cmds, failedCmds)
if err != nil {
moved, ask, addr := isMovedError(err)
if moved || ask {
return c.cmdsMoved(cmds, moved, ask, addr, failedCmds)
}
return err
}
return pipelineReadCmds(rd, cmds)
})
})
})
return err
}
func (c *ClusterClient) txPipelineReadQueued(
rd *proto.Reader, cmds []Cmder, failedCmds *cmdsMap,
rd *proto.Reader, statusCmd *StatusCmd, cmds []Cmder, failedCmds *cmdsMap,
) error {
// Parse queued replies.
var statusCmd StatusCmd
if err := statusCmd.readReply(rd); err != nil {
return err
}
for _, cmd := range cmds {
err := statusCmd.readReply(rd)
if err == nil {
if err == nil || c.checkMovedErr(cmd, err, failedCmds) || isRedisError(err) {
continue
}
if c.checkMovedErr(cmd, err, failedCmds) || internal.IsRedisError(err) {
continue
}
return err
}
@ -1510,23 +1359,106 @@ func (c *ClusterClient) txPipelineReadQueued(
switch line[0] {
case proto.ErrorReply:
err := proto.ParseErrorReply(line)
for _, cmd := range cmds {
if !c.checkMovedErr(cmd, err, failedCmds) {
break
}
}
return err
return proto.ParseErrorReply(line)
case proto.ArrayReply:
// ok
default:
err := fmt.Errorf("redis: expected '*', but got line %q", line)
return err
return fmt.Errorf("redis: expected '*', but got line %q", line)
}
return nil
}
func (c *ClusterClient) cmdsMoved(
cmds []Cmder, moved, ask bool, addr string, failedCmds *cmdsMap,
) error {
node, err := c.nodes.Get(addr)
if err != nil {
return err
}
if moved {
c.state.LazyReload()
for _, cmd := range cmds {
failedCmds.Add(node, cmd)
}
return nil
}
if ask {
for _, cmd := range cmds {
failedCmds.Add(node, NewCmd("asking"), cmd)
}
return nil
}
return nil
}
func (c *ClusterClient) Watch(fn func(*Tx) error, keys ...string) error {
return c.WatchContext(c.ctx, fn, keys...)
}
func (c *ClusterClient) WatchContext(ctx context.Context, fn func(*Tx) error, keys ...string) error {
if len(keys) == 0 {
return fmt.Errorf("redis: Watch requires at least one key")
}
slot := hashtag.Slot(keys[0])
for _, key := range keys[1:] {
if hashtag.Slot(key) != slot {
err := fmt.Errorf("redis: Watch requires all keys to be in the same slot")
return err
}
}
node, err := c.slotMasterNode(slot)
if err != nil {
return err
}
for attempt := 0; attempt <= c.opt.MaxRedirects; attempt++ {
if attempt > 0 {
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
err = node.Client.WatchContext(ctx, fn, keys...)
if err == nil {
break
}
if err != Nil {
c.state.LazyReload()
}
moved, ask, addr := isMovedError(err)
if moved || ask {
node, err = c.nodes.Get(addr)
if err != nil {
return err
}
continue
}
if err == pool.ErrClosed || isReadOnlyError(err) {
node, err = c.slotMasterNode(slot)
if err != nil {
return err
}
continue
}
if isRetryableError(err, true) {
continue
}
return err
}
return err
}
func (c *ClusterClient) pubSub() *PubSub {
var node *clusterNode
pubsub := &PubSub{
@ -1537,16 +1469,21 @@ func (c *ClusterClient) pubSub() *PubSub {
panic("node != nil")
}
slot := hashtag.Slot(channels[0])
var err error
node, err = c.slotMasterNode(slot)
if len(channels) > 0 {
slot := hashtag.Slot(channels[0])
node, err = c.slotMasterNode(slot)
} else {
node, err = c.nodes.Random()
}
if err != nil {
return nil, err
}
cn, err := node.Client.newConn()
cn, err := node.Client.newConn(context.TODO())
if err != nil {
node = nil
return nil, err
}
@ -1583,6 +1520,98 @@ func (c *ClusterClient) PSubscribe(channels ...string) *PubSub {
return pubsub
}
func (c *ClusterClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}
func (c *ClusterClient) cmdsInfo() (map[string]*CommandInfo, error) {
addrs, err := c.nodes.Addrs()
if err != nil {
return nil, err
}
var firstErr error
for _, addr := range addrs {
node, err := c.nodes.Get(addr)
if err != nil {
return nil, err
}
if node == nil {
continue
}
info, err := node.Client.Command().Result()
if err == nil {
return info, nil
}
if firstErr == nil {
firstErr = err
}
}
return nil, firstErr
}
func (c *ClusterClient) cmdInfo(name string) *CommandInfo {
cmdsInfo, err := c.cmdsInfoCache.Get()
if err != nil {
return nil
}
info := cmdsInfo[name]
if info == nil {
internal.Logger.Printf("info for cmd=%s not found", name)
}
return info
}
func (c *ClusterClient) cmdSlot(cmd Cmder) int {
args := cmd.Args()
if args[0] == "cluster" && args[1] == "getkeysinslot" {
return args[2].(int)
}
cmdInfo := c.cmdInfo(cmd.Name())
return cmdSlot(cmd, cmdFirstKeyPos(cmd, cmdInfo))
}
func cmdSlot(cmd Cmder, pos int) int {
if pos == 0 {
return hashtag.RandomSlot()
}
firstKey := cmd.stringArg(pos)
return hashtag.Slot(firstKey)
}
func (c *ClusterClient) cmdNode(cmdInfo *CommandInfo, slot int) (*clusterNode, error) {
state, err := c.state.Get()
if err != nil {
return nil, err
}
if c.opt.ReadOnly && cmdInfo != nil && cmdInfo.ReadOnly {
return c.slotReadOnlyNode(state, slot)
}
return state.slotMasterNode(slot)
}
func (c *clusterClient) slotReadOnlyNode(state *clusterState, slot int) (*clusterNode, error) {
if c.opt.RouteByLatency {
return state.slotClosestNode(slot)
}
if c.opt.RouteRandomly {
return state.slotRandomNode(slot)
}
return state.slotSlaveNode(slot)
}
func (c *ClusterClient) slotMasterNode(slot int) (*clusterNode, error) {
state, err := c.state.Get()
if err != nil {
return nil, err
}
return state.slotMasterNode(slot)
}
func appendUniqueNode(nodes []*clusterNode, node *clusterNode) []*clusterNode {
for _, n := range nodes {
if n == node {
@ -1619,3 +1648,22 @@ func remove(ss []string, es ...string) []string {
}
return ss
}
//------------------------------------------------------------------------------
type cmdsMap struct {
mu sync.Mutex
m map[*clusterNode][]Cmder
}
func newCmdsMap() *cmdsMap {
return &cmdsMap{
m: make(map[*clusterNode][]Cmder),
}
}
func (m *cmdsMap) Add(node *clusterNode, cmds ...Cmder) {
m.mu.Lock()
m.m[node] = append(m.m[node], cmds...)
m.mu.Unlock()
}

vendor/github.com/go-redis/redis/v7/cluster_commands.go generated vendored

@ -14,7 +14,7 @@ func (c *ClusterClient) DBSize() *IntCmd {
return nil
})
if err != nil {
cmd.setErr(err)
cmd.SetErr(err)
return cmd
}
cmd.val = size

vendor/github.com/go-redis/redis/v7/command.go generated vendored

@ -7,27 +7,28 @@ import (
"strings"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/proto"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/proto"
"github.com/go-redis/redis/v7/internal/util"
)
type Cmder interface {
Name() string
Args() []interface{}
String() string
stringArg(int) string
readReply(rd *proto.Reader) error
setErr(error)
readTimeout() *time.Duration
readReply(rd *proto.Reader) error
SetErr(error)
Err() error
}
func setCmdsErr(cmds []Cmder, e error) {
for _, cmd := range cmds {
if cmd.Err() == nil {
cmd.setErr(e)
cmd.SetErr(e)
}
}
}
@ -41,18 +42,21 @@ func cmdsFirstErr(cmds []Cmder) error {
return nil
}
func writeCmd(wr *proto.Writer, cmds ...Cmder) error {
func writeCmds(wr *proto.Writer, cmds []Cmder) error {
for _, cmd := range cmds {
err := wr.WriteArgs(cmd.Args())
if err != nil {
if err := writeCmd(wr, cmd); err != nil {
return err
}
}
return nil
}
func writeCmd(wr *proto.Writer, cmd Cmder) error {
return wr.WriteArgs(cmd.Args())
}
func cmdString(cmd Cmder, val interface{}) string {
var ss []string
ss := make([]string, 0, len(cmd.Args()))
for _, arg := range cmd.Args() {
ss = append(ss, fmt.Sprint(arg))
}
@ -69,7 +73,6 @@ func cmdString(cmd Cmder, val interface{}) string {
}
}
return s
}
func cmdFirstKeyPos(cmd Cmder, info *CommandInfo) int {
@ -92,38 +95,40 @@ func cmdFirstKeyPos(cmd Cmder, info *CommandInfo) int {
//------------------------------------------------------------------------------
type baseCmd struct {
_args []interface{}
err error
args []interface{}
err error
_readTimeout *time.Duration
}
var _ Cmder = (*Cmd)(nil)
func (cmd *baseCmd) Err() error {
return cmd.err
func (cmd *baseCmd) Name() string {
if len(cmd.args) == 0 {
return ""
}
// Cmd name must be lower cased.
return internal.ToLower(cmd.stringArg(0))
}
func (cmd *baseCmd) Args() []interface{} {
return cmd._args
return cmd.args
}
func (cmd *baseCmd) stringArg(pos int) string {
if pos < 0 || pos >= len(cmd._args) {
if pos < 0 || pos >= len(cmd.args) {
return ""
}
s, _ := cmd._args[pos].(string)
s, _ := cmd.args[pos].(string)
return s
}
func (cmd *baseCmd) Name() string {
if len(cmd._args) > 0 {
// Cmd name must be lower cased.
s := internal.ToLower(cmd.stringArg(0))
cmd._args[0] = s
return s
}
return ""
func (cmd *baseCmd) SetErr(e error) {
cmd.err = e
}
func (cmd *baseCmd) Err() error {
return cmd.err
}
func (cmd *baseCmd) readTimeout() *time.Duration {
@ -134,10 +139,6 @@ func (cmd *baseCmd) setReadTimeout(d time.Duration) {
cmd._readTimeout = &d
}
func (cmd *baseCmd) setErr(e error) {
cmd.err = e
}
//------------------------------------------------------------------------------
type Cmd struct {
@ -148,10 +149,14 @@ type Cmd struct {
func NewCmd(args ...interface{}) *Cmd {
return &Cmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
func (cmd *Cmd) String() string {
return cmdString(cmd, cmd.val)
}
func (cmd *Cmd) Val() interface{} {
return cmd.val
}
@ -160,7 +165,7 @@ func (cmd *Cmd) Result() (interface{}, error) {
return cmd.val, cmd.err
}
func (cmd *Cmd) String() (string, error) {
func (cmd *Cmd) Text() (string, error) {
if cmd.err != nil {
return "", cmd.err
}
@ -218,6 +223,25 @@ func (cmd *Cmd) Uint64() (uint64, error) {
}
}
func (cmd *Cmd) Float32() (float32, error) {
if cmd.err != nil {
return 0, cmd.err
}
switch val := cmd.val.(type) {
case int64:
return float32(val), nil
case string:
f, err := strconv.ParseFloat(val, 32)
if err != nil {
return 0, err
}
return float32(f), nil
default:
err := fmt.Errorf("redis: unexpected type=%T for Float32", val)
return 0, err
}
}
func (cmd *Cmd) Float64() (float64, error) {
if cmd.err != nil {
return 0, cmd.err
@ -255,27 +279,21 @@ func (cmd *Cmd) readReply(rd *proto.Reader) error {
// Implements proto.MultiBulkParse
func sliceParser(rd *proto.Reader, n int64) (interface{}, error) {
vals := make([]interface{}, 0, n)
for i := int64(0); i < n; i++ {
vals := make([]interface{}, n)
for i := 0; i < len(vals); i++ {
v, err := rd.ReadReply(sliceParser)
if err != nil {
if err == Nil {
vals = append(vals, nil)
vals[i] = nil
continue
}
if err, ok := err.(proto.RedisError); ok {
vals = append(vals, err)
vals[i] = err
continue
}
return nil, err
}
switch v := v.(type) {
case string:
vals = append(vals, v)
default:
vals = append(vals, v)
}
vals[i] = v
}
return vals, nil
}
@ -292,7 +310,7 @@ var _ Cmder = (*SliceCmd)(nil)
func NewSliceCmd(args ...interface{}) *SliceCmd {
return &SliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -330,7 +348,7 @@ var _ Cmder = (*StatusCmd)(nil)
func NewStatusCmd(args ...interface{}) *StatusCmd {
return &StatusCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -363,7 +381,7 @@ var _ Cmder = (*IntCmd)(nil)
func NewIntCmd(args ...interface{}) *IntCmd {
return &IntCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -375,6 +393,10 @@ func (cmd *IntCmd) Result() (int64, error) {
return cmd.val, cmd.err
}
func (cmd *IntCmd) Uint64() (uint64, error) {
return uint64(cmd.val), cmd.err
}
func (cmd *IntCmd) String() string {
return cmdString(cmd, cmd.val)
}
@ -386,6 +408,49 @@ func (cmd *IntCmd) readReply(rd *proto.Reader) error {
//------------------------------------------------------------------------------
type IntSliceCmd struct {
baseCmd
val []int64
}
var _ Cmder = (*IntSliceCmd)(nil)
func NewIntSliceCmd(args ...interface{}) *IntSliceCmd {
return &IntSliceCmd{
baseCmd: baseCmd{args: args},
}
}
func (cmd *IntSliceCmd) Val() []int64 {
return cmd.val
}
func (cmd *IntSliceCmd) Result() ([]int64, error) {
return cmd.val, cmd.err
}
func (cmd *IntSliceCmd) String() string {
return cmdString(cmd, cmd.val)
}
func (cmd *IntSliceCmd) readReply(rd *proto.Reader) error {
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]int64, n)
for i := 0; i < len(cmd.val); i++ {
num, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
cmd.val[i] = num
}
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
type DurationCmd struct {
baseCmd
@ -397,7 +462,7 @@ var _ Cmder = (*DurationCmd)(nil)
func NewDurationCmd(precision time.Duration, args ...interface{}) *DurationCmd {
return &DurationCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
precision: precision,
}
}
@ -420,7 +485,14 @@ func (cmd *DurationCmd) readReply(rd *proto.Reader) error {
if cmd.err != nil {
return cmd.err
}
cmd.val = time.Duration(n) * cmd.precision
switch n {
// -2 if the key does not exist
// -1 if the key exists but has no associated expire
case -2, -1:
cmd.val = time.Duration(n)
default:
cmd.val = time.Duration(n) * cmd.precision
}
return nil
}
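The effect of that special-casing, sketched against `TTL` (assuming a connected client `rdb` and `fmt` imported): the sentinel replies survive unscaled, so callers can compare against them directly.

```go
d, err := rdb.TTL("some-key").Result()
if err != nil {
	panic(err)
}
switch d {
case -2: // key does not exist
	fmt.Println("missing key")
case -1: // key exists but has no associated expire
	fmt.Println("no TTL set")
default:
	fmt.Println("expires in", d)
}
```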
@ -436,7 +508,7 @@ var _ Cmder = (*TimeCmd)(nil)
func NewTimeCmd(args ...interface{}) *TimeCmd {
return &TimeCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -453,32 +525,25 @@ func (cmd *TimeCmd) String() string {
}
func (cmd *TimeCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(timeParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(time.Time)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d elements, expected 2", n)
}
// Implements proto.MultiBulkParse
func timeParser(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d elements, expected 2", n)
}
sec, err := rd.ReadInt()
if err != nil {
return nil, err
}
sec, err := rd.ReadInt()
if err != nil {
return nil, err
}
microsec, err := rd.ReadInt()
if err != nil {
return nil, err
}
microsec, err := rd.ReadInt()
if err != nil {
return nil, err
}
return time.Unix(sec, microsec*1000), nil
cmd.val = time.Unix(sec, microsec*1000)
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -493,7 +558,7 @@ var _ Cmder = (*BoolCmd)(nil)
func NewBoolCmd(args ...interface{}) *BoolCmd {
return &BoolCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -514,7 +579,6 @@ func (cmd *BoolCmd) readReply(rd *proto.Reader) error {
v, cmd.err = rd.ReadReply(nil)
// `SET key value NX` returns nil when key already exists. But
// `SETNX key value` returns bool (0/1). So convert nil to bool.
// TODO: is this okay?
if cmd.err == Nil {
cmd.val = false
cmd.err = nil
@ -548,7 +612,7 @@ var _ Cmder = (*StringCmd)(nil)
func NewStringCmd(args ...interface{}) *StringCmd {
return &StringCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -561,7 +625,7 @@ func (cmd *StringCmd) Result() (string, error) {
}
func (cmd *StringCmd) Bytes() ([]byte, error) {
return []byte(cmd.val), cmd.err
return util.StringToBytes(cmd.val), cmd.err
}
func (cmd *StringCmd) Int() (int, error) {
@ -585,6 +649,17 @@ func (cmd *StringCmd) Uint64() (uint64, error) {
return strconv.ParseUint(cmd.Val(), 10, 64)
}
func (cmd *StringCmd) Float32() (float32, error) {
if cmd.err != nil {
return 0, cmd.err
}
f, err := strconv.ParseFloat(cmd.Val(), 32)
if err != nil {
return 0, err
}
return float32(f), nil
}
func (cmd *StringCmd) Float64() (float64, error) {
if cmd.err != nil {
return 0, cmd.err
@ -592,6 +667,13 @@ func (cmd *StringCmd) Float64() (float64, error) {
return strconv.ParseFloat(cmd.Val(), 64)
}
func (cmd *StringCmd) Time() (time.Time, error) {
if cmd.err != nil {
return time.Time{}, cmd.err
}
return time.Parse(time.RFC3339Nano, cmd.Val())
}
func (cmd *StringCmd) Scan(val interface{}) error {
if cmd.err != nil {
return cmd.err
@ -620,7 +702,7 @@ var _ Cmder = (*FloatCmd)(nil)
func NewFloatCmd(args ...interface{}) *FloatCmd {
return &FloatCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -653,7 +735,7 @@ var _ Cmder = (*StringSliceCmd)(nil)
func NewStringSliceCmd(args ...interface{}) *StringSliceCmd {
return &StringSliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -674,29 +756,21 @@ func (cmd *StringSliceCmd) ScanSlice(container interface{}) error {
}
func (cmd *StringSliceCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(stringSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.([]string)
return nil
}
// Implements proto.MultiBulkParse
func stringSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
ss := make([]string, 0, n)
for i := int64(0); i < n; i++ {
s, err := rd.ReadString()
if err == Nil {
ss = append(ss, "")
} else if err != nil {
return nil, err
} else {
ss = append(ss, s)
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]string, n)
for i := 0; i < len(cmd.val); i++ {
switch s, err := rd.ReadString(); {
case err == Nil:
cmd.val[i] = ""
case err != nil:
return nil, err
default:
cmd.val[i] = s
}
}
}
return ss, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -711,7 +785,7 @@ var _ Cmder = (*BoolSliceCmd)(nil)
func NewBoolSliceCmd(args ...interface{}) *BoolSliceCmd {
return &BoolSliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -728,26 +802,18 @@ func (cmd *BoolSliceCmd) String() string {
}
func (cmd *BoolSliceCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(boolSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.([]bool)
return nil
}
// Implements proto.MultiBulkParse
func boolSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
bools := make([]bool, 0, n)
for i := int64(0); i < n; i++ {
n, err := rd.ReadIntReply()
if err != nil {
return nil, err
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]bool, n)
for i := 0; i < len(cmd.val); i++ {
n, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
cmd.val[i] = n == 1
}
bools = append(bools, n == 1)
}
return bools, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -762,7 +828,7 @@ var _ Cmder = (*StringStringMapCmd)(nil)
func NewStringStringMapCmd(args ...interface{}) *StringStringMapCmd {
return &StringStringMapCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -779,32 +845,24 @@ func (cmd *StringStringMapCmd) String() string {
}
func (cmd *StringStringMapCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(stringStringMapParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(map[string]string)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make(map[string]string, n/2)
for i := int64(0); i < n; i += 2 {
key, err := rd.ReadString()
if err != nil {
return nil, err
}
// Implements proto.MultiBulkParse
func stringStringMapParser(rd *proto.Reader, n int64) (interface{}, error) {
m := make(map[string]string, n/2)
for i := int64(0); i < n; i += 2 {
key, err := rd.ReadString()
if err != nil {
return nil, err
value, err := rd.ReadString()
if err != nil {
return nil, err
}
cmd.val[key] = value
}
value, err := rd.ReadString()
if err != nil {
return nil, err
}
m[key] = value
}
return m, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -819,7 +877,7 @@ var _ Cmder = (*StringIntMapCmd)(nil)
func NewStringIntMapCmd(args ...interface{}) *StringIntMapCmd {
return &StringIntMapCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -836,32 +894,24 @@ func (cmd *StringIntMapCmd) String() string {
}
func (cmd *StringIntMapCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(stringIntMapParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(map[string]int64)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make(map[string]int64, n/2)
for i := int64(0); i < n; i += 2 {
key, err := rd.ReadString()
if err != nil {
return nil, err
}
// Implements proto.MultiBulkParse
func stringIntMapParser(rd *proto.Reader, n int64) (interface{}, error) {
m := make(map[string]int64, n/2)
for i := int64(0); i < n; i += 2 {
key, err := rd.ReadString()
if err != nil {
return nil, err
n, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
cmd.val[key] = n
}
n, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
m[key] = n
}
return m, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -876,7 +926,7 @@ var _ Cmder = (*StringStructMapCmd)(nil)
func NewStringStructMapCmd(args ...interface{}) *StringStructMapCmd {
return &StringStructMapCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -893,27 +943,18 @@ func (cmd *StringStructMapCmd) String() string {
}
func (cmd *StringStructMapCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(stringStructMapParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(map[string]struct{})
return nil
}
// Implements proto.MultiBulkParse
func stringStructMapParser(rd *proto.Reader, n int64) (interface{}, error) {
m := make(map[string]struct{}, n)
for i := int64(0); i < n; i++ {
key, err := rd.ReadString()
if err != nil {
return nil, err
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make(map[string]struct{}, n)
for i := int64(0); i < n; i++ {
key, err := rd.ReadString()
if err != nil {
return nil, err
}
cmd.val[key] = struct{}{}
}
m[key] = struct{}{}
}
return m, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -933,7 +974,7 @@ var _ Cmder = (*XMessageSliceCmd)(nil)
func NewXMessageSliceCmd(args ...interface{}) *XMessageSliceCmd {
return &XMessageSliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -961,23 +1002,30 @@ func (cmd *XMessageSliceCmd) readReply(rd *proto.Reader) error {
// Implements proto.MultiBulkParse
func xMessageSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
msgs := make([]XMessage, 0, n)
for i := int64(0); i < n; i++ {
msgs := make([]XMessage, n)
for i := 0; i < len(msgs); i++ {
i := i
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
id, err := rd.ReadString()
if err != nil {
return nil, err
}
var values map[string]interface{}
v, err := rd.ReadArrayReply(stringInterfaceMapParser)
if err != nil {
return nil, err
if err != proto.Nil {
return nil, err
}
} else {
values = v.(map[string]interface{})
}
msgs = append(msgs, XMessage{
msgs[i] = XMessage{
ID: id,
Values: v.(map[string]interface{}),
})
Values: values,
}
return nil, nil
})
if err != nil {
@ -1023,7 +1071,7 @@ var _ Cmder = (*XStreamSliceCmd)(nil)
func NewXStreamSliceCmd(args ...interface{}) *XStreamSliceCmd {
return &XStreamSliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1040,45 +1088,38 @@ func (cmd *XStreamSliceCmd) String() string {
}
func (cmd *XStreamSliceCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(xStreamSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.([]XStream)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]XStream, n)
for i := 0; i < len(cmd.val); i++ {
i := i
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d, wanted 2", n)
}
// Implements proto.MultiBulkParse
func xStreamSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
ret := make([]XStream, 0, n)
for i := int64(0); i < n; i++ {
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d, wanted 2", n)
}
stream, err := rd.ReadString()
if err != nil {
return nil, err
}
stream, err := rd.ReadString()
if err != nil {
return nil, err
}
v, err := rd.ReadArrayReply(xMessageSliceParser)
if err != nil {
return nil, err
}
v, err := rd.ReadArrayReply(xMessageSliceParser)
if err != nil {
return nil, err
}
ret = append(ret, XStream{
Stream: stream,
Messages: v.([]XMessage),
cmd.val[i] = XStream{
Stream: stream,
Messages: v.([]XMessage),
}
return nil, nil
})
return nil, nil
})
if err != nil {
return nil, err
if err != nil {
return nil, err
}
}
}
return ret, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -1099,7 +1140,7 @@ var _ Cmder = (*XPendingCmd)(nil)
func NewXPendingCmd(args ...interface{}) *XPendingCmd {
return &XPendingCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1116,81 +1157,74 @@ func (cmd *XPendingCmd) String() string {
}
func (cmd *XPendingCmd) readReply(rd *proto.Reader) error {
var info interface{}
info, cmd.err = rd.ReadArrayReply(xPendingParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = info.(*XPending)
return nil
}
func xPendingParser(rd *proto.Reader, n int64) (interface{}, error) {
if n != 4 {
return nil, fmt.Errorf("got %d, wanted 4", n)
}
count, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
lower, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
}
higher, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
}
pending := &XPending{
Count: count,
Lower: lower,
Higher: higher,
}
_, err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
for i := int64(0); i < n; i++ {
_, err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d, wanted 2", n)
}
consumerName, err := rd.ReadString()
if err != nil {
return nil, err
}
consumerPending, err := rd.ReadInt()
if err != nil {
return nil, err
}
if pending.Consumers == nil {
pending.Consumers = make(map[string]int64)
}
pending.Consumers[consumerName] = consumerPending
return nil, nil
})
if err != nil {
return nil, err
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 4 {
return nil, fmt.Errorf("got %d, wanted 4", n)
}
count, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
lower, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
}
higher, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
}
cmd.val = &XPending{
Count: count,
Lower: lower,
Higher: higher,
}
_, err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
for i := int64(0); i < n; i++ {
_, err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 2 {
return nil, fmt.Errorf("got %d, wanted 2", n)
}
consumerName, err := rd.ReadString()
if err != nil {
return nil, err
}
consumerPending, err := rd.ReadInt()
if err != nil {
return nil, err
}
if cmd.val.Consumers == nil {
cmd.val.Consumers = make(map[string]int64)
}
cmd.val.Consumers[consumerName] = consumerPending
return nil, nil
})
if err != nil {
return nil, err
}
}
return nil, nil
})
if err != nil && err != Nil {
return nil, err
}
return nil, nil
})
if err != nil && err != Nil {
return nil, err
}
return pending, nil
return cmd.err
}
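For context, a minimal sketch of consuming this reply through the public API (hypothetical stream and group names, assuming an initialized *redis.Client named rdb and the usual imports):

// XPENDING mystream mygroup — the summary form parsed by readReply above.
info, err := rdb.XPending("mystream", "mygroup").Result()
if err == nil {
	// info.Count, info.Lower, info.Higher and the per-consumer
	// pending counts in info.Consumers mirror the parsed fields.
	fmt.Println(info.Count, info.Lower, info.Higher, info.Consumers)
}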
//------------------------------------------------------------------------------
type XPendingExt struct {
Id string
ID string
Consumer string
Idle time.Duration
RetryCount int64
@ -1205,7 +1239,7 @@ var _ Cmder = (*XPendingExtCmd)(nil)
func NewXPendingExtCmd(args ...interface{}) *XPendingExtCmd {
return &XPendingExtCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1222,62 +1256,143 @@ func (cmd *XPendingExtCmd) String() string {
}
func (cmd *XPendingExtCmd) readReply(rd *proto.Reader) error {
var info interface{}
info, cmd.err = rd.ReadArrayReply(xPendingExtSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = info.([]XPendingExt)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]XPendingExt, 0, n)
for i := int64(0); i < n; i++ {
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 4 {
return nil, fmt.Errorf("got %d, wanted 4", n)
}
func xPendingExtSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
ret := make([]XPendingExt, 0, n)
for i := int64(0); i < n; i++ {
_, err := rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 4 {
return nil, fmt.Errorf("got %d, wanted 4", n)
}
id, err := rd.ReadString()
if err != nil {
return nil, err
}
id, err := rd.ReadString()
consumer, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
}
idle, err := rd.ReadIntReply()
if err != nil && err != Nil {
return nil, err
}
retryCount, err := rd.ReadIntReply()
if err != nil && err != Nil {
return nil, err
}
cmd.val = append(cmd.val, XPendingExt{
ID: id,
Consumer: consumer,
Idle: time.Duration(idle) * time.Millisecond,
RetryCount: retryCount,
})
return nil, nil
})
if err != nil {
return nil, err
}
}
return nil, nil
})
return cmd.err
}
consumer, err := rd.ReadString()
if err != nil && err != Nil {
return nil, err
//------------------------------------------------------------------------------
type XInfoGroupsCmd struct {
baseCmd
val []XInfoGroups
}
type XInfoGroups struct {
Name string
Consumers int64
Pending int64
LastDeliveredID string
}
var _ Cmder = (*XInfoGroupsCmd)(nil)
func NewXInfoGroupsCmd(stream string) *XInfoGroupsCmd {
return &XInfoGroupsCmd{
baseCmd: baseCmd{args: []interface{}{"xinfo", "groups", stream}},
}
}
func (cmd *XInfoGroupsCmd) Val() []XInfoGroups {
return cmd.val
}
func (cmd *XInfoGroupsCmd) Result() ([]XInfoGroups, error) {
return cmd.val, cmd.err
}
func (cmd *XInfoGroupsCmd) String() string {
return cmdString(cmd, cmd.val)
}
func (cmd *XInfoGroupsCmd) readReply(rd *proto.Reader) error {
_, cmd.err = rd.ReadArrayReply(
func(rd *proto.Reader, n int64) (interface{}, error) {
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(xGroupInfoParser)
if err != nil {
return nil, err
}
cmd.val = append(cmd.val, v.(XInfoGroups))
}
idle, err := rd.ReadIntReply()
if err != nil && err != Nil {
return nil, err
}
retryCount, err := rd.ReadIntReply()
if err != nil && err != Nil {
return nil, err
}
ret = append(ret, XPendingExt{
Id: id,
Consumer: consumer,
Idle: time.Duration(idle) * time.Millisecond,
RetryCount: retryCount,
})
return nil, nil
})
return nil
}
func xGroupInfoParser(rd *proto.Reader, n int64) (interface{}, error) {
if n != 8 {
return nil, fmt.Errorf("redis: got %d elements in XINFO GROUPS reply,"+
"wanted 8", n)
}
var (
err error
grp XInfoGroups
key string
val string
)
for i := 0; i < 4; i++ {
key, err = rd.ReadString()
if err != nil {
return nil, err
}
val, err = rd.ReadString()
if err != nil {
return nil, err
}
switch key {
case "name":
grp.Name = val
case "consumers":
grp.Consumers, err = strconv.ParseInt(val, 0, 64)
case "pending":
grp.Pending, err = strconv.ParseInt(val, 0, 64)
case "last-delivered-id":
grp.LastDeliveredID = val
default:
return nil, fmt.Errorf("redis: unexpected content %s "+
"in XINFO GROUPS reply", key)
}
if err != nil {
return nil, err
}
}
return ret, nil
return grp, err
}
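A short usage sketch of the command this parser backs (hypothetical stream name, assuming rdb as above):

groups, err := rdb.XInfoGroups("mystream").Result()
if err == nil {
	for _, g := range groups {
		// Each entry carries the four key/value pairs decoded by xGroupInfoParser.
		fmt.Println(g.Name, g.Consumers, g.Pending, g.LastDeliveredID)
	}
}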
//------------------------------------------------------------------------------
//------------------------------------------------------------------------------
type ZSliceCmd struct {
baseCmd
@ -1288,7 +1403,7 @@ var _ Cmder = (*ZSliceCmd)(nil)
func NewZSliceCmd(args ...interface{}) *ZSliceCmd {
return &ZSliceCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1305,34 +1420,27 @@ func (cmd *ZSliceCmd) String() string {
}
func (cmd *ZSliceCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(zSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.([]Z)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]Z, n/2)
for i := 0; i < len(cmd.val); i++ {
member, err := rd.ReadString()
if err != nil {
return nil, err
}
// Implements proto.MultiBulkParse
func zSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
zz := make([]Z, n/2)
for i := int64(0); i < n; i += 2 {
var err error
score, err := rd.ReadFloatReply()
if err != nil {
return nil, err
}
z := &zz[i/2]
z.Member, err = rd.ReadString()
if err != nil {
return nil, err
cmd.val[i] = Z{
Member: member,
Score: score,
}
}
z.Score, err = rd.ReadFloatReply()
if err != nil {
return nil, err
}
}
return zz, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -1340,22 +1448,22 @@ func zSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
type ZWithKeyCmd struct {
baseCmd
val ZWithKey
val *ZWithKey
}
var _ Cmder = (*ZWithKeyCmd)(nil)
func NewZWithKeyCmd(args ...interface{}) *ZWithKeyCmd {
return &ZWithKeyCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
func (cmd *ZWithKeyCmd) Val() ZWithKey {
func (cmd *ZWithKeyCmd) Val() *ZWithKey {
return cmd.val
}
func (cmd *ZWithKeyCmd) Result() (ZWithKey, error) {
func (cmd *ZWithKeyCmd) Result() (*ZWithKey, error) {
return cmd.Val(), cmd.Err()
}
@ -1364,37 +1472,32 @@ func (cmd *ZWithKeyCmd) String() string {
}
func (cmd *ZWithKeyCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(zWithKeyParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(ZWithKey)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
if n != 3 {
return nil, fmt.Errorf("got %d elements, expected 3", n)
}
// Implements proto.MultiBulkParse
func zWithKeyParser(rd *proto.Reader, n int64) (interface{}, error) {
if n != 3 {
return nil, fmt.Errorf("got %d elements, expected 3", n)
}
cmd.val = &ZWithKey{}
var err error
var z ZWithKey
var err error
cmd.val.Key, err = rd.ReadString()
if err != nil {
return nil, err
}
z.Key, err = rd.ReadString()
if err != nil {
return nil, err
}
z.Member, err = rd.ReadString()
if err != nil {
return nil, err
}
z.Score, err = rd.ReadFloatReply()
if err != nil {
return nil, err
}
return z, nil
cmd.val.Member, err = rd.ReadString()
if err != nil {
return nil, err
}
cmd.val.Score, err = rd.ReadFloatReply()
if err != nil {
return nil, err
}
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -1412,7 +1515,7 @@ var _ Cmder = (*ScanCmd)(nil)
func NewScanCmd(process func(cmd Cmder) error, args ...interface{}) *ScanCmd {
return &ScanCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
process: process,
}
}
@ -1444,7 +1547,7 @@ func (cmd *ScanCmd) Iterator() *ScanIterator {
//------------------------------------------------------------------------------
type ClusterNode struct {
Id string
ID string
Addr string
}
@ -1464,7 +1567,7 @@ var _ Cmder = (*ClusterSlotsCmd)(nil)
func NewClusterSlotsCmd(args ...interface{}) *ClusterSlotsCmd {
return &ClusterSlotsCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1481,77 +1584,69 @@ func (cmd *ClusterSlotsCmd) String() string {
}
func (cmd *ClusterSlotsCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(clusterSlotsParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.([]ClusterSlot)
return nil
}
// Implements proto.MultiBulkParse
func clusterSlotsParser(rd *proto.Reader, n int64) (interface{}, error) {
slots := make([]ClusterSlot, n)
for i := 0; i < len(slots); i++ {
n, err := rd.ReadArrayLen()
if err != nil {
return nil, err
}
if n < 2 {
err := fmt.Errorf("redis: got %d elements in cluster info, expected at least 2", n)
return nil, err
}
start, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
end, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
nodes := make([]ClusterNode, n-2)
for j := 0; j < len(nodes); j++ {
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]ClusterSlot, n)
for i := 0; i < len(cmd.val); i++ {
n, err := rd.ReadArrayLen()
if err != nil {
return nil, err
}
if n != 2 && n != 3 {
err := fmt.Errorf("got %d elements in cluster info address, expected 2 or 3", n)
if n < 2 {
err := fmt.Errorf("redis: got %d elements in cluster info, expected at least 2", n)
return nil, err
}
ip, err := rd.ReadString()
start, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
port, err := rd.ReadString()
end, err := rd.ReadIntReply()
if err != nil {
return nil, err
}
nodes[j].Addr = net.JoinHostPort(ip, port)
if n == 3 {
id, err := rd.ReadString()
nodes := make([]ClusterNode, n-2)
for j := 0; j < len(nodes); j++ {
n, err := rd.ReadArrayLen()
if err != nil {
return nil, err
}
nodes[j].Id = id
if n != 2 && n != 3 {
err := fmt.Errorf("got %d elements in cluster info address, expected 2 or 3", n)
return nil, err
}
ip, err := rd.ReadString()
if err != nil {
return nil, err
}
port, err := rd.ReadString()
if err != nil {
return nil, err
}
nodes[j].Addr = net.JoinHostPort(ip, port)
if n == 3 {
id, err := rd.ReadString()
if err != nil {
return nil, err
}
nodes[j].ID = id
}
}
cmd.val[i] = ClusterSlot{
Start: int(start),
End: int(end),
Nodes: nodes,
}
}
slots[i] = ClusterSlot{
Start: int(start),
End: int(end),
Nodes: nodes,
}
}
return slots, nil
return nil, nil
})
return cmd.err
}
//------------------------------------------------------------------------------
@ -1588,6 +1683,13 @@ type GeoLocationCmd struct {
var _ Cmder = (*GeoLocationCmd)(nil)
func NewGeoLocationCmd(q *GeoRadiusQuery, args ...interface{}) *GeoLocationCmd {
return &GeoLocationCmd{
baseCmd: baseCmd{args: geoLocationArgs(q, args...)},
q: q,
}
}
func geoLocationArgs(q *GeoRadiusQuery, args ...interface{}) []interface{} {
args = append(args, q.Radius)
if q.Unit != "" {
args = append(args, q.Unit)
@ -1617,10 +1719,7 @@ func NewGeoLocationCmd(q *GeoRadiusQuery, args ...interface{}) *GeoLocationCmd {
args = append(args, "storedist")
args = append(args, q.StoreDist)
}
return &GeoLocationCmd{
baseCmd: baseCmd{_args: args},
q: q,
}
return args
}
func (cmd *GeoLocationCmd) Val() []GeoLocation {
@ -1645,6 +1744,30 @@ func (cmd *GeoLocationCmd) readReply(rd *proto.Reader) error {
return nil
}
func newGeoLocationSliceParser(q *GeoRadiusQuery) proto.MultiBulkParse {
return func(rd *proto.Reader, n int64) (interface{}, error) {
locs := make([]GeoLocation, 0, n)
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(newGeoLocationParser(q))
if err != nil {
return nil, err
}
switch vv := v.(type) {
case string:
locs = append(locs, GeoLocation{
Name: vv,
})
case *GeoLocation:
// TODO: avoid copying
locs = append(locs, *vv)
default:
return nil, fmt.Errorf("got %T, expected string or *GeoLocation", v)
}
}
return locs, nil
}
}
func newGeoLocationParser(q *GeoRadiusQuery) proto.MultiBulkParse {
return func(rd *proto.Reader, n int64) (interface{}, error) {
var loc GeoLocation
@ -1689,29 +1812,6 @@ func newGeoLocationParser(q *GeoRadiusQuery) proto.MultiBulkParse {
}
}
func newGeoLocationSliceParser(q *GeoRadiusQuery) proto.MultiBulkParse {
return func(rd *proto.Reader, n int64) (interface{}, error) {
locs := make([]GeoLocation, 0, n)
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(newGeoLocationParser(q))
if err != nil {
return nil, err
}
switch vv := v.(type) {
case string:
locs = append(locs, GeoLocation{
Name: vv,
})
case *GeoLocation:
locs = append(locs, *vv)
default:
return nil, fmt.Errorf("got %T, expected string or *GeoLocation", v)
}
}
return locs, nil
}
}
//------------------------------------------------------------------------------
type GeoPos struct {
@ -1721,19 +1821,19 @@ type GeoPos struct {
type GeoPosCmd struct {
baseCmd
positions []*GeoPos
val []*GeoPos
}
var _ Cmder = (*GeoPosCmd)(nil)
func NewGeoPosCmd(args ...interface{}) *GeoPosCmd {
return &GeoPosCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
func (cmd *GeoPosCmd) Val() []*GeoPos {
return cmd.positions
return cmd.val
}
func (cmd *GeoPosCmd) Result() ([]*GeoPos, error) {
@ -1741,55 +1841,42 @@ func (cmd *GeoPosCmd) Result() ([]*GeoPos, error) {
}
func (cmd *GeoPosCmd) String() string {
return cmdString(cmd, cmd.positions)
return cmdString(cmd, cmd.val)
}
func (cmd *GeoPosCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(geoPosSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.positions = v.([]*GeoPos)
return nil
}
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make([]*GeoPos, n)
for i := 0; i < len(cmd.val); i++ {
i := i
_, err := rd.ReadReply(func(rd *proto.Reader, n int64) (interface{}, error) {
longitude, err := rd.ReadFloatReply()
if err != nil {
return nil, err
}
func geoPosSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
positions := make([]*GeoPos, 0, n)
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(geoPosParser)
if err != nil {
if err == Nil {
positions = append(positions, nil)
continue
latitude, err := rd.ReadFloatReply()
if err != nil {
return nil, err
}
cmd.val[i] = &GeoPos{
Longitude: longitude,
Latitude: latitude,
}
return nil, nil
})
if err != nil {
if err == Nil {
cmd.val[i] = nil
continue
}
return nil, err
}
return nil, err
}
switch v := v.(type) {
case *GeoPos:
positions = append(positions, v)
default:
return nil, fmt.Errorf("got %T, expected *GeoPos", v)
}
}
return positions, nil
}
func geoPosParser(rd *proto.Reader, n int64) (interface{}, error) {
var pos GeoPos
var err error
pos.Longitude, err = rd.ReadFloatReply()
if err != nil {
return nil, err
}
pos.Latitude, err = rd.ReadFloatReply()
if err != nil {
return nil, err
}
return &pos, nil
return nil, nil
})
return cmd.err
}
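Note that the parser stores nil for members that do not exist in the geo set; a minimal sketch (hypothetical key and member names, assuming rdb as above):

positions, err := rdb.GeoPos("geo-key", "known-member", "missing-member").Result()
if err == nil {
	// positions[i] is nil when the member is not in the sorted set.
	for _, pos := range positions {
		if pos != nil {
			fmt.Println(pos.Longitude, pos.Latitude)
		}
	}
}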
//------------------------------------------------------------------------------
@ -1798,6 +1885,7 @@ type CommandInfo struct {
Name string
Arity int8
Flags []string
ACLFlags []string
FirstKeyPos int8
LastKeyPos int8
StepCount int8
@ -1814,7 +1902,7 @@ var _ Cmder = (*CommandsInfoCmd)(nil)
func NewCommandsInfoCmd(args ...interface{}) *CommandsInfoCmd {
return &CommandsInfoCmd{
baseCmd: baseCmd{_args: args},
baseCmd: baseCmd{args: args},
}
}
@ -1831,38 +1919,35 @@ func (cmd *CommandsInfoCmd) String() string {
}
func (cmd *CommandsInfoCmd) readReply(rd *proto.Reader) error {
var v interface{}
v, cmd.err = rd.ReadArrayReply(commandInfoSliceParser)
if cmd.err != nil {
return cmd.err
}
cmd.val = v.(map[string]*CommandInfo)
return nil
}
// Implements proto.MultiBulkParse
func commandInfoSliceParser(rd *proto.Reader, n int64) (interface{}, error) {
m := make(map[string]*CommandInfo, n)
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(commandInfoParser)
if err != nil {
return nil, err
_, cmd.err = rd.ReadArrayReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.val = make(map[string]*CommandInfo, n)
for i := int64(0); i < n; i++ {
v, err := rd.ReadReply(commandInfoParser)
if err != nil {
return nil, err
}
vv := v.(*CommandInfo)
cmd.val[vv.Name] = vv
}
vv := v.(*CommandInfo)
m[vv.Name] = vv
}
return m, nil
return nil, nil
})
return cmd.err
}
func commandInfoParser(rd *proto.Reader, n int64) (interface{}, error) {
const numArgRedis5 = 6
const numArgRedis6 = 7
switch n {
case numArgRedis5, numArgRedis6:
// continue
default:
return nil, fmt.Errorf("redis: got %d elements in COMMAND reply, wanted 7", n)
}
var cmd CommandInfo
var err error
if n != 6 {
return nil, fmt.Errorf("redis: got %d elements in COMMAND reply, wanted 6", n)
}
cmd.Name, err = rd.ReadString()
if err != nil {
return nil, err
@ -1874,11 +1959,23 @@ func commandInfoParser(rd *proto.Reader, n int64) (interface{}, error) {
}
cmd.Arity = int8(arity)
flags, err := rd.ReadReply(stringSliceParser)
_, err = rd.ReadReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.Flags = make([]string, n)
for i := 0; i < len(cmd.Flags); i++ {
switch s, err := rd.ReadString(); {
case err == Nil:
cmd.Flags[i] = ""
case err != nil:
return nil, err
default:
cmd.Flags[i] = s
}
}
return nil, nil
})
if err != nil {
return nil, err
}
cmd.Flags = flags.([]string)
firstKeyPos, err := rd.ReadIntReply()
if err != nil {
@ -1905,6 +2002,28 @@ func commandInfoParser(rd *proto.Reader, n int64) (interface{}, error) {
}
}
if n == numArgRedis5 {
return &cmd, nil
}
_, err = rd.ReadReply(func(rd *proto.Reader, n int64) (interface{}, error) {
cmd.ACLFlags = make([]string, n)
for i := 0; i < len(cmd.ACLFlags); i++ {
switch s, err := rd.ReadString(); {
case err == Nil:
cmd.ACLFlags[i] = ""
case err != nil:
return nil, err
default:
cmd.ACLFlags[i] = s
}
}
return nil, nil
})
if err != nil {
return nil, err
}
return &cmd, nil
}
@ -1929,6 +2048,15 @@ func (c *cmdsInfoCache) Get() (map[string]*CommandInfo, error) {
if err != nil {
return err
}
// Extensions have cmd names in upper case. Convert them to lower case.
for k, v := range cmds {
lower := internal.ToLower(k)
if lower != k {
cmds[lower] = v
}
}
c.cmds = cmds
return nil
})

View File

@ -5,7 +5,7 @@ import (
"io"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/v7/internal"
)
func usePrecise(dur time.Duration) bool {
@ -14,7 +14,7 @@ func usePrecise(dur time.Duration) bool {
func formatMs(dur time.Duration) int64 {
if dur > 0 && dur < time.Millisecond {
internal.Logf(
internal.Logger.Printf(
"specified duration is %s, but minimal supported value is %s",
dur, time.Millisecond,
)
@ -24,7 +24,7 @@ func formatMs(dur time.Duration) int64 {
func formatSec(dur time.Duration) int64 {
if dur > 0 && dur < time.Second {
internal.Logf(
internal.Logger.Printf(
"specified duration is %s, but minimal supported value is %s",
dur, time.Second,
)
@ -34,17 +34,21 @@ func formatSec(dur time.Duration) int64 {
func appendArgs(dst, src []interface{}) []interface{} {
if len(src) == 1 {
if ss, ok := src[0].([]string); ok {
for _, s := range ss {
switch v := src[0].(type) {
case []string:
for _, s := range v {
dst = append(dst, s)
}
return dst
case map[string]interface{}:
for k, v := range v {
dst = append(dst, k, v)
}
return dst
}
}
for _, v := range src {
dst = append(dst, v)
}
dst = append(dst, src...)
return dst
}
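A self-contained sketch of the flattening behaviour added here (re-implemented for illustration only; the real helper is unexported):

package main

import "fmt"

// flatten mirrors appendArgs: a single []string or
// map[string]interface{} argument is expanded in place,
// anything else is appended as-is.
func flatten(dst, src []interface{}) []interface{} {
	if len(src) == 1 {
		switch v := src[0].(type) {
		case []string:
			for _, s := range v {
				dst = append(dst, s)
			}
			return dst
		case map[string]interface{}:
			for k, val := range v {
				dst = append(dst, k, val)
			}
			return dst
		}
	}
	return append(dst, src...)
}

func main() {
	fmt.Println(flatten([]interface{}{"mset"},
		[]interface{}{map[string]interface{}{"k1": "v1"}}))
	// Output: [mset k1 v1]
}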
@ -67,8 +71,8 @@ type Cmdable interface {
Expire(key string, expiration time.Duration) *BoolCmd
ExpireAt(key string, tm time.Time) *BoolCmd
Keys(pattern string) *StringSliceCmd
Migrate(host, port, key string, db int64, timeout time.Duration) *StatusCmd
Move(key string, db int64) *BoolCmd
Migrate(host, port, key string, db int, timeout time.Duration) *StatusCmd
Move(key string, db int) *BoolCmd
ObjectRefCount(key string) *IntCmd
ObjectEncoding(key string) *StringCmd
ObjectIdleTime(key string) *DurationCmd
@ -98,6 +102,7 @@ type Cmdable interface {
BitOpXor(destKey string, keys ...string) *IntCmd
BitOpNot(destKey string, key string) *IntCmd
BitPos(key string, bit int64, pos ...int64) *IntCmd
BitField(key string, args ...interface{}) *IntSliceCmd
Decr(key string) *IntCmd
DecrBy(key string, decrement int64) *IntCmd
Get(key string) *StringCmd
@ -108,8 +113,8 @@ type Cmdable interface {
IncrBy(key string, value int64) *IntCmd
IncrByFloat(key string, value float64) *FloatCmd
MGet(keys ...string) *SliceCmd
MSet(pairs ...interface{}) *StatusCmd
MSetNX(pairs ...interface{}) *BoolCmd
MSet(values ...interface{}) *StatusCmd
MSetNX(values ...interface{}) *BoolCmd
Set(key string, value interface{}, expiration time.Duration) *StatusCmd
SetBit(key string, offset int64, value int) *IntCmd
SetNX(key string, value interface{}, expiration time.Duration) *BoolCmd
@ -125,8 +130,8 @@ type Cmdable interface {
HKeys(key string) *StringSliceCmd
HLen(key string) *IntCmd
HMGet(key string, fields ...string) *SliceCmd
HMSet(key string, fields map[string]interface{}) *StatusCmd
HSet(key, field string, value interface{}) *BoolCmd
HSet(key string, values ...interface{}) *IntCmd
HMSet(key string, values ...interface{}) *BoolCmd
HSetNX(key, field string, value interface{}) *BoolCmd
HVals(key string) *StringSliceCmd
BLPop(timeout time.Duration, keys ...string) *StringSliceCmd
@ -139,7 +144,7 @@ type Cmdable interface {
LLen(key string) *IntCmd
LPop(key string) *StringCmd
LPush(key string, values ...interface{}) *IntCmd
LPushX(key string, value interface{}) *IntCmd
LPushX(key string, values ...interface{}) *IntCmd
LRange(key string, start, stop int64) *StringSliceCmd
LRem(key string, count int64, value interface{}) *IntCmd
LSet(key string, index int64, value interface{}) *StatusCmd
@ -147,7 +152,7 @@ type Cmdable interface {
RPop(key string) *StringCmd
RPopLPush(source, destination string) *StringCmd
RPush(key string, values ...interface{}) *IntCmd
RPushX(key string, value interface{}) *IntCmd
RPushX(key string, values ...interface{}) *IntCmd
SAdd(key string, members ...interface{}) *IntCmd
SCard(key string) *IntCmd
SDiff(keys ...string) *StringSliceCmd
@ -187,29 +192,30 @@ type Cmdable interface {
XClaimJustID(a *XClaimArgs) *StringSliceCmd
XTrim(key string, maxLen int64) *IntCmd
XTrimApprox(key string, maxLen int64) *IntCmd
XInfoGroups(key string) *XInfoGroupsCmd
BZPopMax(timeout time.Duration, keys ...string) *ZWithKeyCmd
BZPopMin(timeout time.Duration, keys ...string) *ZWithKeyCmd
ZAdd(key string, members ...Z) *IntCmd
ZAddNX(key string, members ...Z) *IntCmd
ZAddXX(key string, members ...Z) *IntCmd
ZAddCh(key string, members ...Z) *IntCmd
ZAddNXCh(key string, members ...Z) *IntCmd
ZAddXXCh(key string, members ...Z) *IntCmd
ZIncr(key string, member Z) *FloatCmd
ZIncrNX(key string, member Z) *FloatCmd
ZIncrXX(key string, member Z) *FloatCmd
ZAdd(key string, members ...*Z) *IntCmd
ZAddNX(key string, members ...*Z) *IntCmd
ZAddXX(key string, members ...*Z) *IntCmd
ZAddCh(key string, members ...*Z) *IntCmd
ZAddNXCh(key string, members ...*Z) *IntCmd
ZAddXXCh(key string, members ...*Z) *IntCmd
ZIncr(key string, member *Z) *FloatCmd
ZIncrNX(key string, member *Z) *FloatCmd
ZIncrXX(key string, member *Z) *FloatCmd
ZCard(key string) *IntCmd
ZCount(key, min, max string) *IntCmd
ZLexCount(key, min, max string) *IntCmd
ZIncrBy(key string, increment float64, member string) *FloatCmd
ZInterStore(destination string, store ZStore, keys ...string) *IntCmd
ZInterStore(destination string, store *ZStore) *IntCmd
ZPopMax(key string, count ...int64) *ZSliceCmd
ZPopMin(key string, count ...int64) *ZSliceCmd
ZRange(key string, start, stop int64) *StringSliceCmd
ZRangeWithScores(key string, start, stop int64) *ZSliceCmd
ZRangeByScore(key string, opt ZRangeBy) *StringSliceCmd
ZRangeByLex(key string, opt ZRangeBy) *StringSliceCmd
ZRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd
ZRangeByScore(key string, opt *ZRangeBy) *StringSliceCmd
ZRangeByLex(key string, opt *ZRangeBy) *StringSliceCmd
ZRangeByScoreWithScores(key string, opt *ZRangeBy) *ZSliceCmd
ZRank(key, member string) *IntCmd
ZRem(key string, members ...interface{}) *IntCmd
ZRemRangeByRank(key string, start, stop int64) *IntCmd
@ -217,12 +223,12 @@ type Cmdable interface {
ZRemRangeByLex(key, min, max string) *IntCmd
ZRevRange(key string, start, stop int64) *StringSliceCmd
ZRevRangeWithScores(key string, start, stop int64) *ZSliceCmd
ZRevRangeByScore(key string, opt ZRangeBy) *StringSliceCmd
ZRevRangeByLex(key string, opt ZRangeBy) *StringSliceCmd
ZRevRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd
ZRevRangeByScore(key string, opt *ZRangeBy) *StringSliceCmd
ZRevRangeByLex(key string, opt *ZRangeBy) *StringSliceCmd
ZRevRangeByScoreWithScores(key string, opt *ZRangeBy) *ZSliceCmd
ZRevRank(key, member string) *IntCmd
ZScore(key, member string) *FloatCmd
ZUnionStore(dest string, store ZStore, keys ...string) *IntCmd
ZUnionStore(dest string, store *ZStore) *IntCmd
PFAdd(key string, els ...interface{}) *IntCmd
PFCount(keys ...string) *IntCmd
PFMerge(dest string, keys ...string) *StatusCmd
@ -283,9 +289,9 @@ type Cmdable interface {
GeoAdd(key string, geoLocation ...*GeoLocation) *IntCmd
GeoPos(key string, members ...string) *GeoPosCmd
GeoRadius(key string, longitude, latitude float64, query *GeoRadiusQuery) *GeoLocationCmd
GeoRadiusRO(key string, longitude, latitude float64, query *GeoRadiusQuery) *GeoLocationCmd
GeoRadiusStore(key string, longitude, latitude float64, query *GeoRadiusQuery) *IntCmd
GeoRadiusByMember(key, member string, query *GeoRadiusQuery) *GeoLocationCmd
GeoRadiusByMemberRO(key, member string, query *GeoRadiusQuery) *GeoLocationCmd
GeoRadiusByMemberStore(key, member string, query *GeoRadiusQuery) *IntCmd
GeoDist(key string, member1, member2, unit string) *FloatCmd
GeoHash(key string, members ...string) *StringSliceCmd
ReadOnly() *StatusCmd
@ -296,6 +302,7 @@ type Cmdable interface {
type StatefulCmdable interface {
Cmdable
Auth(password string) *StatusCmd
AuthACL(username, password string) *StatusCmd
Select(index int) *StatusCmd
SwapDB(index1, index2 int) *StatusCmd
ClientSetName(name string) *BoolCmd
@ -306,132 +313,127 @@ var _ Cmdable = (*Tx)(nil)
var _ Cmdable = (*Ring)(nil)
var _ Cmdable = (*ClusterClient)(nil)
type cmdable struct {
process func(cmd Cmder) error
}
type cmdable func(cmd Cmder) error
func (c *cmdable) setProcessor(fn func(Cmder) error) {
c.process = fn
}
type statefulCmdable struct {
cmdable
process func(cmd Cmder) error
}
func (c *statefulCmdable) setProcessor(fn func(Cmder) error) {
c.process = fn
c.cmdable.setProcessor(fn)
}
type statefulCmdable func(cmd Cmder) error
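The struct holding a process field has been replaced by a bare function type; a minimal illustration of the new calling convention (a sketch within this package, using the types defined above):

// A cmdable now is the process function itself, so executing a
// command becomes `_ = c(cmd)` instead of `c.process(cmd)`.
var c cmdable = func(cmd Cmder) error {
	// ... write cmd to the connection and read the reply ...
	return nil
}
_ = c(NewStatusCmd("ping"))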
//------------------------------------------------------------------------------
func (c *statefulCmdable) Auth(password string) *StatusCmd {
func (c statefulCmdable) Auth(password string) *StatusCmd {
cmd := NewStatusCmd("auth", password)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Echo(message interface{}) *StringCmd {
// AuthACL performs an AUTH command using the given user and pass.
// It should be used to authenticate the current connection as one of the users defined in the ACL list
// when connecting to a Redis 6.0 instance, or greater, that is using the Redis ACL system.
func (c statefulCmdable) AuthACL(username, password string) *StatusCmd {
cmd := NewStatusCmd("auth", username, password)
_ = c(cmd)
return cmd
}
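A usage sketch (hypothetical credentials; AuthACL is stateful, so it is reached through a dedicated connection, assuming rdb is an initialized *redis.Client and log is imported):

conn := rdb.Conn()
defer conn.Close()
if err := conn.AuthACL("app-user", "app-pass").Err(); err != nil {
	// authentication against the Redis 6 ACL system failed
	log.Fatal(err)
}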
func (c cmdable) Echo(message interface{}) *StringCmd {
cmd := NewStringCmd("echo", message)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Ping() *StatusCmd {
func (c cmdable) Ping() *StatusCmd {
cmd := NewStatusCmd("ping")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Wait(numSlaves int, timeout time.Duration) *IntCmd {
func (c cmdable) Wait(numSlaves int, timeout time.Duration) *IntCmd {
cmd := NewIntCmd("wait", numSlaves, int(timeout/time.Millisecond))
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Quit() *StatusCmd {
func (c cmdable) Quit() *StatusCmd {
panic("not implemented")
}
func (c *statefulCmdable) Select(index int) *StatusCmd {
func (c statefulCmdable) Select(index int) *StatusCmd {
cmd := NewStatusCmd("select", index)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *statefulCmdable) SwapDB(index1, index2 int) *StatusCmd {
func (c statefulCmdable) SwapDB(index1, index2 int) *StatusCmd {
cmd := NewStatusCmd("swapdb", index1, index2)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) Command() *CommandsInfoCmd {
func (c cmdable) Command() *CommandsInfoCmd {
cmd := NewCommandsInfoCmd("command")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Del(keys ...string) *IntCmd {
func (c cmdable) Del(keys ...string) *IntCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "del"
for i, key := range keys {
args[1+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Unlink(keys ...string) *IntCmd {
func (c cmdable) Unlink(keys ...string) *IntCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "unlink"
for i, key := range keys {
args[1+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Dump(key string) *StringCmd {
func (c cmdable) Dump(key string) *StringCmd {
cmd := NewStringCmd("dump", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Exists(keys ...string) *IntCmd {
func (c cmdable) Exists(keys ...string) *IntCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "exists"
for i, key := range keys {
args[1+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Expire(key string, expiration time.Duration) *BoolCmd {
func (c cmdable) Expire(key string, expiration time.Duration) *BoolCmd {
cmd := NewBoolCmd("expire", key, formatSec(expiration))
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ExpireAt(key string, tm time.Time) *BoolCmd {
func (c cmdable) ExpireAt(key string, tm time.Time) *BoolCmd {
cmd := NewBoolCmd("expireat", key, tm.Unix())
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Keys(pattern string) *StringSliceCmd {
func (c cmdable) Keys(pattern string) *StringSliceCmd {
cmd := NewStringSliceCmd("keys", pattern)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Migrate(host, port, key string, db int64, timeout time.Duration) *StatusCmd {
func (c cmdable) Migrate(host, port, key string, db int, timeout time.Duration) *StatusCmd {
cmd := NewStatusCmd(
"migrate",
host,
@ -441,92 +443,92 @@ func (c *cmdable) Migrate(host, port, key string, db int64, timeout time.Duratio
formatMs(timeout),
)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Move(key string, db int64) *BoolCmd {
func (c cmdable) Move(key string, db int) *BoolCmd {
cmd := NewBoolCmd("move", key, db)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ObjectRefCount(key string) *IntCmd {
func (c cmdable) ObjectRefCount(key string) *IntCmd {
cmd := NewIntCmd("object", "refcount", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ObjectEncoding(key string) *StringCmd {
func (c cmdable) ObjectEncoding(key string) *StringCmd {
cmd := NewStringCmd("object", "encoding", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ObjectIdleTime(key string) *DurationCmd {
func (c cmdable) ObjectIdleTime(key string) *DurationCmd {
cmd := NewDurationCmd(time.Second, "object", "idletime", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Persist(key string) *BoolCmd {
func (c cmdable) Persist(key string) *BoolCmd {
cmd := NewBoolCmd("persist", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PExpire(key string, expiration time.Duration) *BoolCmd {
func (c cmdable) PExpire(key string, expiration time.Duration) *BoolCmd {
cmd := NewBoolCmd("pexpire", key, formatMs(expiration))
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PExpireAt(key string, tm time.Time) *BoolCmd {
func (c cmdable) PExpireAt(key string, tm time.Time) *BoolCmd {
cmd := NewBoolCmd(
"pexpireat",
key,
tm.UnixNano()/int64(time.Millisecond),
)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PTTL(key string) *DurationCmd {
func (c cmdable) PTTL(key string) *DurationCmd {
cmd := NewDurationCmd(time.Millisecond, "pttl", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RandomKey() *StringCmd {
func (c cmdable) RandomKey() *StringCmd {
cmd := NewStringCmd("randomkey")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Rename(key, newkey string) *StatusCmd {
func (c cmdable) Rename(key, newkey string) *StatusCmd {
cmd := NewStatusCmd("rename", key, newkey)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RenameNX(key, newkey string) *BoolCmd {
func (c cmdable) RenameNX(key, newkey string) *BoolCmd {
cmd := NewBoolCmd("renamenx", key, newkey)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Restore(key string, ttl time.Duration, value string) *StatusCmd {
func (c cmdable) Restore(key string, ttl time.Duration, value string) *StatusCmd {
cmd := NewStatusCmd(
"restore",
key,
formatMs(ttl),
value,
)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RestoreReplace(key string, ttl time.Duration, value string) *StatusCmd {
func (c cmdable) RestoreReplace(key string, ttl time.Duration, value string) *StatusCmd {
cmd := NewStatusCmd(
"restore",
key,
@ -534,7 +536,7 @@ func (c *cmdable) RestoreReplace(key string, ttl time.Duration, value string) *S
value,
"replace",
)
c.process(cmd)
_ = c(cmd)
return cmd
}
@ -566,52 +568,52 @@ func (sort *Sort) args(key string) []interface{} {
return args
}
func (c *cmdable) Sort(key string, sort *Sort) *StringSliceCmd {
func (c cmdable) Sort(key string, sort *Sort) *StringSliceCmd {
cmd := NewStringSliceCmd(sort.args(key)...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SortStore(key, store string, sort *Sort) *IntCmd {
func (c cmdable) SortStore(key, store string, sort *Sort) *IntCmd {
args := sort.args(key)
if store != "" {
args = append(args, "store", store)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SortInterfaces(key string, sort *Sort) *SliceCmd {
func (c cmdable) SortInterfaces(key string, sort *Sort) *SliceCmd {
cmd := NewSliceCmd(sort.args(key)...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Touch(keys ...string) *IntCmd {
func (c cmdable) Touch(keys ...string) *IntCmd {
args := make([]interface{}, len(keys)+1)
args[0] = "touch"
for i, key := range keys {
args[i+1] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) TTL(key string) *DurationCmd {
func (c cmdable) TTL(key string) *DurationCmd {
cmd := NewDurationCmd(time.Second, "ttl", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Type(key string) *StatusCmd {
func (c cmdable) Type(key string) *StatusCmd {
cmd := NewStatusCmd("type", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Scan(cursor uint64, match string, count int64) *ScanCmd {
func (c cmdable) Scan(cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"scan", cursor}
if match != "" {
args = append(args, "match", match)
@ -619,12 +621,12 @@ func (c *cmdable) Scan(cursor uint64, match string, count int64) *ScanCmd {
if count > 0 {
args = append(args, "count", count)
}
cmd := NewScanCmd(c.process, args...)
c.process(cmd)
cmd := NewScanCmd(c, args...)
_ = c(cmd)
return cmd
}
func (c *cmdable) SScan(key string, cursor uint64, match string, count int64) *ScanCmd {
func (c cmdable) SScan(key string, cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"sscan", key, cursor}
if match != "" {
args = append(args, "match", match)
@ -632,12 +634,12 @@ func (c *cmdable) SScan(key string, cursor uint64, match string, count int64) *S
if count > 0 {
args = append(args, "count", count)
}
cmd := NewScanCmd(c.process, args...)
c.process(cmd)
cmd := NewScanCmd(c, args...)
_ = c(cmd)
return cmd
}
func (c *cmdable) HScan(key string, cursor uint64, match string, count int64) *ScanCmd {
func (c cmdable) HScan(key string, cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"hscan", key, cursor}
if match != "" {
args = append(args, "match", match)
@ -645,12 +647,12 @@ func (c *cmdable) HScan(key string, cursor uint64, match string, count int64) *S
if count > 0 {
args = append(args, "count", count)
}
cmd := NewScanCmd(c.process, args...)
c.process(cmd)
cmd := NewScanCmd(c, args...)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZScan(key string, cursor uint64, match string, count int64) *ScanCmd {
func (c cmdable) ZScan(key string, cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"zscan", key, cursor}
if match != "" {
args = append(args, "match", match)
@ -658,16 +660,16 @@ func (c *cmdable) ZScan(key string, cursor uint64, match string, count int64) *S
if count > 0 {
args = append(args, "count", count)
}
cmd := NewScanCmd(c.process, args...)
c.process(cmd)
cmd := NewScanCmd(c, args...)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) Append(key, value string) *IntCmd {
func (c cmdable) Append(key, value string) *IntCmd {
cmd := NewIntCmd("append", key, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
@ -675,7 +677,7 @@ type BitCount struct {
Start, End int64
}
func (c *cmdable) BitCount(key string, bitCount *BitCount) *IntCmd {
func (c cmdable) BitCount(key string, bitCount *BitCount) *IntCmd {
args := []interface{}{"bitcount", key}
if bitCount != nil {
args = append(
@ -685,11 +687,11 @@ func (c *cmdable) BitCount(key string, bitCount *BitCount) *IntCmd {
)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) bitOp(op, destKey string, keys ...string) *IntCmd {
func (c cmdable) bitOp(op, destKey string, keys ...string) *IntCmd {
args := make([]interface{}, 3+len(keys))
args[0] = "bitop"
args[1] = op
@ -698,27 +700,27 @@ func (c *cmdable) bitOp(op, destKey string, keys ...string) *IntCmd {
args[3+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) BitOpAnd(destKey string, keys ...string) *IntCmd {
func (c cmdable) BitOpAnd(destKey string, keys ...string) *IntCmd {
return c.bitOp("and", destKey, keys...)
}
func (c *cmdable) BitOpOr(destKey string, keys ...string) *IntCmd {
func (c cmdable) BitOpOr(destKey string, keys ...string) *IntCmd {
return c.bitOp("or", destKey, keys...)
}
func (c *cmdable) BitOpXor(destKey string, keys ...string) *IntCmd {
func (c cmdable) BitOpXor(destKey string, keys ...string) *IntCmd {
return c.bitOp("xor", destKey, keys...)
}
func (c *cmdable) BitOpNot(destKey string, key string) *IntCmd {
func (c cmdable) BitOpNot(destKey string, key string) *IntCmd {
return c.bitOp("not", destKey, key)
}
func (c *cmdable) BitPos(key string, bit int64, pos ...int64) *IntCmd {
func (c cmdable) BitPos(key string, bit int64, pos ...int64) *IntCmd {
args := make([]interface{}, 3+len(pos))
args[0] = "bitpos"
args[1] = key
@ -734,91 +736,109 @@ func (c *cmdable) BitPos(key string, bit int64, pos ...int64) *IntCmd {
panic("too many arguments")
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Decr(key string) *IntCmd {
func (c cmdable) BitField(key string, args ...interface{}) *IntSliceCmd {
a := make([]interface{}, 0, 2+len(args))
a = append(a, "bitfield")
a = append(a, key)
a = append(a, args...)
cmd := NewIntSliceCmd(a...)
_ = c(cmd)
return cmd
}
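BitField forwards raw subcommands straight to the server; a minimal sketch (hypothetical key, assuming rdb as above):

// BITFIELD bits SET u8 0 255 GET u8 0 — one reply integer per subcommand.
vals, err := rdb.BitField("bits", "SET", "u8", 0, 255, "GET", "u8", 0).Result()
if err == nil {
	fmt.Println(vals) // e.g. [0 255]
}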
func (c cmdable) Decr(key string) *IntCmd {
cmd := NewIntCmd("decr", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) DecrBy(key string, decrement int64) *IntCmd {
func (c cmdable) DecrBy(key string, decrement int64) *IntCmd {
cmd := NewIntCmd("decrby", key, decrement)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `GET key` command. It returns the redis.Nil error when the key does not exist.
func (c *cmdable) Get(key string) *StringCmd {
func (c cmdable) Get(key string) *StringCmd {
cmd := NewStringCmd("get", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GetBit(key string, offset int64) *IntCmd {
func (c cmdable) GetBit(key string, offset int64) *IntCmd {
cmd := NewIntCmd("getbit", key, offset)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GetRange(key string, start, end int64) *StringCmd {
func (c cmdable) GetRange(key string, start, end int64) *StringCmd {
cmd := NewStringCmd("getrange", key, start, end)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GetSet(key string, value interface{}) *StringCmd {
func (c cmdable) GetSet(key string, value interface{}) *StringCmd {
cmd := NewStringCmd("getset", key, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Incr(key string) *IntCmd {
func (c cmdable) Incr(key string) *IntCmd {
cmd := NewIntCmd("incr", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) IncrBy(key string, value int64) *IntCmd {
func (c cmdable) IncrBy(key string, value int64) *IntCmd {
cmd := NewIntCmd("incrby", key, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) IncrByFloat(key string, value float64) *FloatCmd {
func (c cmdable) IncrByFloat(key string, value float64) *FloatCmd {
cmd := NewFloatCmd("incrbyfloat", key, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) MGet(keys ...string) *SliceCmd {
func (c cmdable) MGet(keys ...string) *SliceCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "mget"
for i, key := range keys {
args[1+i] = key
}
cmd := NewSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) MSet(pairs ...interface{}) *StatusCmd {
args := make([]interface{}, 1, 1+len(pairs))
// MSet is like Set but accepts multiple values:
// - MSet("key1", "value1", "key2", "value2")
// - MSet([]string{"key1", "value1", "key2", "value2"})
// - MSet(map[string]interface{}{"key1": "value1", "key2": "value2"})
func (c cmdable) MSet(values ...interface{}) *StatusCmd {
args := make([]interface{}, 1, 1+len(values))
args[0] = "mset"
args = appendArgs(args, pairs)
args = appendArgs(args, values)
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
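The three accepted call forms from the comment above, as a sketch (assuming rdb as above):

rdb.MSet("key1", "value1", "key2", "value2")
rdb.MSet([]string{"key1", "value1", "key2", "value2"})
rdb.MSet(map[string]interface{}{"key1": "value1", "key2": "value2"})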
func (c *cmdable) MSetNX(pairs ...interface{}) *BoolCmd {
args := make([]interface{}, 1, 1+len(pairs))
// MSetNX is like SetNX but accepts multiple values:
// - MSetNX("key1", "value1", "key2", "value2")
// - MSetNX([]string{"key1", "value1", "key2", "value2"})
// - MSetNX(map[string]interface{}{"key1": "value1", "key2": "value2"})
func (c cmdable) MSetNX(values ...interface{}) *BoolCmd {
args := make([]interface{}, 1, 1+len(values))
args[0] = "msetnx"
args = appendArgs(args, pairs)
args = appendArgs(args, values)
cmd := NewBoolCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
@ -826,8 +846,8 @@ func (c *cmdable) MSetNX(pairs ...interface{}) *BoolCmd {
//
// Use expiration for `SETEX`-like behavior.
// Zero expiration means the key has no expiration time.
func (c *cmdable) Set(key string, value interface{}, expiration time.Duration) *StatusCmd {
args := make([]interface{}, 3, 4)
func (c cmdable) Set(key string, value interface{}, expiration time.Duration) *StatusCmd {
args := make([]interface{}, 3, 5)
args[0] = "set"
args[1] = key
args[2] = value
@ -839,25 +859,25 @@ func (c *cmdable) Set(key string, value interface{}, expiration time.Duration) *
}
}
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
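A sketch of the two expiration modes described above (hypothetical keys, assuming rdb as above and time imported):

rdb.Set("session", "token", 10*time.Second) // SET session token ex 10
rdb.Set("config", "value", 0)               // no expiration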
func (c *cmdable) SetBit(key string, offset int64, value int) *IntCmd {
func (c cmdable) SetBit(key string, offset int64, value int) *IntCmd {
cmd := NewIntCmd(
"setbit",
key,
offset,
value,
)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SET key value [expiration] NX` command.
//
// Zero expiration means the key has no expiration time.
func (c *cmdable) SetNX(key string, value interface{}, expiration time.Duration) *BoolCmd {
func (c cmdable) SetNX(key string, value interface{}, expiration time.Duration) *BoolCmd {
var cmd *BoolCmd
if expiration == 0 {
// Use old `SETNX` to support old Redis versions.
@ -869,14 +889,14 @@ func (c *cmdable) SetNX(key string, value interface{}, expiration time.Duration)
cmd = NewBoolCmd("set", key, value, "ex", formatSec(expiration), "nx")
}
}
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SET key value [expiration] XX` command.
//
// Zero expiration means the key has no expiration time.
func (c *cmdable) SetXX(key string, value interface{}, expiration time.Duration) *BoolCmd {
func (c cmdable) SetXX(key string, value interface{}, expiration time.Duration) *BoolCmd {
var cmd *BoolCmd
if expiration == 0 {
cmd = NewBoolCmd("set", key, value, "xx")
@ -887,25 +907,25 @@ func (c *cmdable) SetXX(key string, value interface{}, expiration time.Duration)
cmd = NewBoolCmd("set", key, value, "ex", formatSec(expiration), "xx")
}
}
c.process(cmd)
_ = c(cmd)
return cmd
}
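SetNX and SetXX only differ in the condition flag sent with SET; a sketch (hypothetical key, assuming rdb as above):

created, _ := rdb.SetNX("lock", "owner-1", 30*time.Second).Result() // only if absent
updated, _ := rdb.SetXX("lock", "owner-2", 30*time.Second).Result() // only if present
fmt.Println(created, updated)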
func (c *cmdable) SetRange(key string, offset int64, value string) *IntCmd {
func (c cmdable) SetRange(key string, offset int64, value string) *IntCmd {
cmd := NewIntCmd("setrange", key, offset, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) StrLen(key string) *IntCmd {
func (c cmdable) StrLen(key string) *IntCmd {
cmd := NewIntCmd("strlen", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) HDel(key string, fields ...string) *IntCmd {
func (c cmdable) HDel(key string, fields ...string) *IntCmd {
args := make([]interface{}, 2+len(fields))
args[0] = "hdel"
args[1] = key
@ -913,53 +933,55 @@ func (c *cmdable) HDel(key string, fields ...string) *IntCmd {
args[2+i] = field
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HExists(key, field string) *BoolCmd {
func (c cmdable) HExists(key, field string) *BoolCmd {
cmd := NewBoolCmd("hexists", key, field)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HGet(key, field string) *StringCmd {
func (c cmdable) HGet(key, field string) *StringCmd {
cmd := NewStringCmd("hget", key, field)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HGetAll(key string) *StringStringMapCmd {
func (c cmdable) HGetAll(key string) *StringStringMapCmd {
cmd := NewStringStringMapCmd("hgetall", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HIncrBy(key, field string, incr int64) *IntCmd {
func (c cmdable) HIncrBy(key, field string, incr int64) *IntCmd {
cmd := NewIntCmd("hincrby", key, field, incr)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HIncrByFloat(key, field string, incr float64) *FloatCmd {
func (c cmdable) HIncrByFloat(key, field string, incr float64) *FloatCmd {
cmd := NewFloatCmd("hincrbyfloat", key, field, incr)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HKeys(key string) *StringSliceCmd {
func (c cmdable) HKeys(key string) *StringSliceCmd {
cmd := NewStringSliceCmd("hkeys", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HLen(key string) *IntCmd {
func (c cmdable) HLen(key string) *IntCmd {
cmd := NewIntCmd("hlen", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HMGet(key string, fields ...string) *SliceCmd {
// HMGet returns the values for the specified fields in the hash stored at key.
// It returns []interface{} so that a missing field (nil) can be distinguished from an empty string.
func (c cmdable) HMGet(key string, fields ...string) *SliceCmd {
args := make([]interface{}, 2+len(fields))
args[0] = "hmget"
args[1] = key
@ -967,46 +989,52 @@ func (c *cmdable) HMGet(key string, fields ...string) *SliceCmd {
args[2+i] = field
}
cmd := NewSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
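Because missing fields come back as nil while present-but-empty fields come back as "", callers can tell the two apart; a sketch (hypothetical hash, assuming rdb as above):

vals, err := rdb.HMGet("myhash", "present", "missing").Result()
if err == nil {
	// vals[1] == nil for a field that does not exist;
	// an existing field with an empty value yields "".
	fmt.Println(vals)
}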
func (c *cmdable) HMSet(key string, fields map[string]interface{}) *StatusCmd {
args := make([]interface{}, 2+len(fields)*2)
// HSet accepts values in the following formats:
// - HSet("myhash", "key1", "value1", "key2", "value2")
// - HSet("myhash", []string{"key1", "value1", "key2", "value2"})
// - HSet("myhash", map[string]interface{}{"key1": "value1", "key2": "value2"})
//
// Note that it requires Redis v4 for multiple field/value pairs support.
func (c cmdable) HSet(key string, values ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "hset"
args[1] = key
args = appendArgs(args, values)
cmd := NewIntCmd(args...)
_ = c(cmd)
return cmd
}
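A sketch of the new variadic HSet (multiple field/value pairs require Redis v4, assuming rdb as above):

added, err := rdb.HSet("myhash", "f1", "v1", "f2", "v2").Result()
if err == nil {
	fmt.Println(added) // number of fields newly created
}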
// HMSet is a deprecated version of HSet left for compatibility with Redis 3.
func (c cmdable) HMSet(key string, values ...interface{}) *BoolCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "hmset"
args[1] = key
i := 2
for k, v := range fields {
args[i] = k
args[i+1] = v
i += 2
}
cmd := NewStatusCmd(args...)
c.process(cmd)
args = appendArgs(args, values)
cmd := NewBoolCmd(args...)
_ = c(cmd)
return cmd
}
func (c *cmdable) HSet(key, field string, value interface{}) *BoolCmd {
cmd := NewBoolCmd("hset", key, field, value)
c.process(cmd)
return cmd
}
func (c *cmdable) HSetNX(key, field string, value interface{}) *BoolCmd {
func (c cmdable) HSetNX(key, field string, value interface{}) *BoolCmd {
cmd := NewBoolCmd("hsetnx", key, field, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) HVals(key string) *StringSliceCmd {
func (c cmdable) HVals(key string) *StringSliceCmd {
cmd := NewStringSliceCmd("hvals", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) BLPop(timeout time.Duration, keys ...string) *StringSliceCmd {
func (c cmdable) BLPop(timeout time.Duration, keys ...string) *StringSliceCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "blpop"
for i, key := range keys {
@ -1015,11 +1043,11 @@ func (c *cmdable) BLPop(timeout time.Duration, keys ...string) *StringSliceCmd {
args[len(args)-1] = formatSec(timeout)
cmd := NewStringSliceCmd(args...)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) BRPop(timeout time.Duration, keys ...string) *StringSliceCmd {
func (c cmdable) BRPop(timeout time.Duration, keys ...string) *StringSliceCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "brpop"
for i, key := range keys {
@ -1028,11 +1056,11 @@ func (c *cmdable) BRPop(timeout time.Duration, keys ...string) *StringSliceCmd {
args[len(keys)+1] = formatSec(timeout)
cmd := NewStringSliceCmd(args...)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) BRPopLPush(source, destination string, timeout time.Duration) *StringCmd {
func (c cmdable) BRPopLPush(source, destination string, timeout time.Duration) *StringCmd {
cmd := NewStringCmd(
"brpoplpush",
source,
@ -1040,154 +1068,162 @@ func (c *cmdable) BRPopLPush(source, destination string, timeout time.Duration)
formatSec(timeout),
)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LIndex(key string, index int64) *StringCmd {
func (c cmdable) LIndex(key string, index int64) *StringCmd {
cmd := NewStringCmd("lindex", key, index)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LInsert(key, op string, pivot, value interface{}) *IntCmd {
func (c cmdable) LInsert(key, op string, pivot, value interface{}) *IntCmd {
cmd := NewIntCmd("linsert", key, op, pivot, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LInsertBefore(key string, pivot, value interface{}) *IntCmd {
func (c cmdable) LInsertBefore(key string, pivot, value interface{}) *IntCmd {
cmd := NewIntCmd("linsert", key, "before", pivot, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LInsertAfter(key string, pivot, value interface{}) *IntCmd {
func (c cmdable) LInsertAfter(key string, pivot, value interface{}) *IntCmd {
cmd := NewIntCmd("linsert", key, "after", pivot, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LLen(key string) *IntCmd {
func (c cmdable) LLen(key string) *IntCmd {
cmd := NewIntCmd("llen", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LPop(key string) *StringCmd {
func (c cmdable) LPop(key string) *StringCmd {
cmd := NewStringCmd("lpop", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LPush(key string, values ...interface{}) *IntCmd {
func (c cmdable) LPush(key string, values ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "lpush"
args[1] = key
args = appendArgs(args, values)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LPushX(key string, value interface{}) *IntCmd {
cmd := NewIntCmd("lpushx", key, value)
c.process(cmd)
func (c cmdable) LPushX(key string, values ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "lpushx"
args[1] = key
args = appendArgs(args, values)
cmd := NewIntCmd(args...)
_ = c(cmd)
return cmd
}
func (c *cmdable) LRange(key string, start, stop int64) *StringSliceCmd {
func (c cmdable) LRange(key string, start, stop int64) *StringSliceCmd {
cmd := NewStringSliceCmd(
"lrange",
key,
start,
stop,
)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LRem(key string, count int64, value interface{}) *IntCmd {
func (c cmdable) LRem(key string, count int64, value interface{}) *IntCmd {
cmd := NewIntCmd("lrem", key, count, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LSet(key string, index int64, value interface{}) *StatusCmd {
func (c cmdable) LSet(key string, index int64, value interface{}) *StatusCmd {
cmd := NewStatusCmd("lset", key, index, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LTrim(key string, start, stop int64) *StatusCmd {
func (c cmdable) LTrim(key string, start, stop int64) *StatusCmd {
cmd := NewStatusCmd(
"ltrim",
key,
start,
stop,
)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RPop(key string) *StringCmd {
func (c cmdable) RPop(key string) *StringCmd {
cmd := NewStringCmd("rpop", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RPopLPush(source, destination string) *StringCmd {
func (c cmdable) RPopLPush(source, destination string) *StringCmd {
cmd := NewStringCmd("rpoplpush", source, destination)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RPush(key string, values ...interface{}) *IntCmd {
func (c cmdable) RPush(key string, values ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "rpush"
args[1] = key
args = appendArgs(args, values)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) RPushX(key string, value interface{}) *IntCmd {
cmd := NewIntCmd("rpushx", key, value)
c.process(cmd)
func (c cmdable) RPushX(key string, values ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(values))
args[0] = "rpushx"
args[1] = key
args = appendArgs(args, values)
cmd := NewIntCmd(args...)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) SAdd(key string, members ...interface{}) *IntCmd {
func (c cmdable) SAdd(key string, members ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(members))
args[0] = "sadd"
args[1] = key
args = appendArgs(args, members)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SCard(key string) *IntCmd {
func (c cmdable) SCard(key string) *IntCmd {
cmd := NewIntCmd("scard", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SDiff(keys ...string) *StringSliceCmd {
func (c cmdable) SDiff(keys ...string) *StringSliceCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "sdiff"
for i, key := range keys {
args[1+i] = key
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SDiffStore(destination string, keys ...string) *IntCmd {
func (c cmdable) SDiffStore(destination string, keys ...string) *IntCmd {
args := make([]interface{}, 2+len(keys))
args[0] = "sdiffstore"
args[1] = destination
@ -1195,22 +1231,22 @@ func (c *cmdable) SDiffStore(destination string, keys ...string) *IntCmd {
args[2+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SInter(keys ...string) *StringSliceCmd {
func (c cmdable) SInter(keys ...string) *StringSliceCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "sinter"
for i, key := range keys {
args[1+i] = key
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SInterStore(destination string, keys ...string) *IntCmd {
func (c cmdable) SInterStore(destination string, keys ...string) *IntCmd {
args := make([]interface{}, 2+len(keys))
args[0] = "sinterstore"
args[1] = destination
@ -1218,86 +1254,86 @@ func (c *cmdable) SInterStore(destination string, keys ...string) *IntCmd {
args[2+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SIsMember(key string, member interface{}) *BoolCmd {
func (c cmdable) SIsMember(key string, member interface{}) *BoolCmd {
cmd := NewBoolCmd("sismember", key, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SMEMBERS key` command output as a slice
func (c *cmdable) SMembers(key string) *StringSliceCmd {
func (c cmdable) SMembers(key string) *StringSliceCmd {
cmd := NewStringSliceCmd("smembers", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SMEMBERS key` command output as a map
func (c *cmdable) SMembersMap(key string) *StringStructMapCmd {
func (c cmdable) SMembersMap(key string) *StringStructMapCmd {
cmd := NewStringStructMapCmd("smembers", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SMove(source, destination string, member interface{}) *BoolCmd {
func (c cmdable) SMove(source, destination string, member interface{}) *BoolCmd {
cmd := NewBoolCmd("smove", source, destination, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SPOP key` command.
func (c *cmdable) SPop(key string) *StringCmd {
func (c cmdable) SPop(key string) *StringCmd {
cmd := NewStringCmd("spop", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SPOP key count` command.
func (c *cmdable) SPopN(key string, count int64) *StringSliceCmd {
func (c cmdable) SPopN(key string, count int64) *StringSliceCmd {
cmd := NewStringSliceCmd("spop", key, count)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SRANDMEMBER key` command.
func (c *cmdable) SRandMember(key string) *StringCmd {
func (c cmdable) SRandMember(key string) *StringCmd {
cmd := NewStringCmd("srandmember", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `SRANDMEMBER key count` command.
func (c *cmdable) SRandMemberN(key string, count int64) *StringSliceCmd {
func (c cmdable) SRandMemberN(key string, count int64) *StringSliceCmd {
cmd := NewStringSliceCmd("srandmember", key, count)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SRem(key string, members ...interface{}) *IntCmd {
func (c cmdable) SRem(key string, members ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(members))
args[0] = "srem"
args[1] = key
args = appendArgs(args, members)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SUnion(keys ...string) *StringSliceCmd {
func (c cmdable) SUnion(keys ...string) *StringSliceCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "sunion"
for i, key := range keys {
args[1+i] = key
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SUnionStore(destination string, keys ...string) *IntCmd {
func (c cmdable) SUnionStore(destination string, keys ...string) *IntCmd {
args := make([]interface{}, 2+len(keys))
args[0] = "sunionstore"
args[1] = destination
@@ -1305,7 +1341,7 @@ func (c *cmdable) SUnionStore(destination string, keys ...string) *IntCmd {
args[2+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
@@ -1319,7 +1355,7 @@ type XAddArgs struct {
Values map[string]interface{}
}
func (c *cmdable) XAdd(a *XAddArgs) *StringCmd {
func (c cmdable) XAdd(a *XAddArgs) *StringCmd {
args := make([]interface{}, 0, 6+len(a.Values)*2)
args = append(args, "xadd")
args = append(args, a.Stream)
@@ -1339,57 +1375,57 @@ func (c *cmdable) XAdd(a *XAddArgs) *StringCmd {
}
cmd := NewStringCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XDel(stream string, ids ...string) *IntCmd {
func (c cmdable) XDel(stream string, ids ...string) *IntCmd {
args := []interface{}{"xdel", stream}
for _, id := range ids {
args = append(args, id)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XLen(stream string) *IntCmd {
func (c cmdable) XLen(stream string) *IntCmd {
cmd := NewIntCmd("xlen", stream)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XRange(stream, start, stop string) *XMessageSliceCmd {
func (c cmdable) XRange(stream, start, stop string) *XMessageSliceCmd {
cmd := NewXMessageSliceCmd("xrange", stream, start, stop)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XRangeN(stream, start, stop string, count int64) *XMessageSliceCmd {
func (c cmdable) XRangeN(stream, start, stop string, count int64) *XMessageSliceCmd {
cmd := NewXMessageSliceCmd("xrange", stream, start, stop, "count", count)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XRevRange(stream, start, stop string) *XMessageSliceCmd {
func (c cmdable) XRevRange(stream, start, stop string) *XMessageSliceCmd {
cmd := NewXMessageSliceCmd("xrevrange", stream, start, stop)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XRevRangeN(stream, start, stop string, count int64) *XMessageSliceCmd {
func (c cmdable) XRevRangeN(stream, start, stop string, count int64) *XMessageSliceCmd {
cmd := NewXMessageSliceCmd("xrevrange", stream, start, stop, "count", count)
c.process(cmd)
_ = c(cmd)
return cmd
}
type XReadArgs struct {
Streams []string
Streams []string // list of streams and ids, e.g. stream1 stream2 id1 id2
Count int64
Block time.Duration
}
func (c *cmdable) XRead(a *XReadArgs) *XStreamSliceCmd {
func (c cmdable) XRead(a *XReadArgs) *XStreamSliceCmd {
args := make([]interface{}, 0, 5+len(a.Streams))
args = append(args, "xread")
if a.Count > 0 {
@@ -1400,6 +1436,7 @@ func (c *cmdable) XRead(a *XReadArgs) *XStreamSliceCmd {
args = append(args, "block")
args = append(args, int64(a.Block/time.Millisecond))
}
args = append(args, "streams")
for _, s := range a.Streams {
args = append(args, s)
@@ -1409,58 +1446,57 @@ func (c *cmdable) XRead(a *XReadArgs) *XStreamSliceCmd {
if a.Block >= 0 {
cmd.setReadTimeout(a.Block)
}
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XReadStreams(streams ...string) *XStreamSliceCmd {
func (c cmdable) XReadStreams(streams ...string) *XStreamSliceCmd {
return c.XRead(&XReadArgs{
Streams: streams,
Block: -1,
})
}
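// ---- editor's example (not part of the upstream diff) ----
// A minimal sketch of calling the new value-receiver XRead: the Streams
// field lists every stream name first, then one starting ID per stream.
// The address and stream names here are hypothetical.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	res, err := client.XRead(&redis.XReadArgs{
		Streams: []string{"stream1", "stream2", "0", "0"}, // two streams, then two IDs
		Count:   10,
		Block:   -1, // negative Block = do not block, the same convention XReadStreams uses
	}).Result()
	fmt.Println(res, err)
}
// -----------------------------------------------------------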
func (c *cmdable) XGroupCreate(stream, group, start string) *StatusCmd {
func (c cmdable) XGroupCreate(stream, group, start string) *StatusCmd {
cmd := NewStatusCmd("xgroup", "create", stream, group, start)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XGroupCreateMkStream(stream, group, start string) *StatusCmd {
func (c cmdable) XGroupCreateMkStream(stream, group, start string) *StatusCmd {
cmd := NewStatusCmd("xgroup", "create", stream, group, start, "mkstream")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XGroupSetID(stream, group, start string) *StatusCmd {
func (c cmdable) XGroupSetID(stream, group, start string) *StatusCmd {
cmd := NewStatusCmd("xgroup", "setid", stream, group, start)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XGroupDestroy(stream, group string) *IntCmd {
func (c cmdable) XGroupDestroy(stream, group string) *IntCmd {
cmd := NewIntCmd("xgroup", "destroy", stream, group)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XGroupDelConsumer(stream, group, consumer string) *IntCmd {
func (c cmdable) XGroupDelConsumer(stream, group, consumer string) *IntCmd {
cmd := NewIntCmd("xgroup", "delconsumer", stream, group, consumer)
c.process(cmd)
_ = c(cmd)
return cmd
}
type XReadGroupArgs struct {
Group string
Consumer string
// List of streams and ids.
Streams []string
Count int64
Block time.Duration
NoAck bool
Streams []string // list of streams and ids, e.g. stream1 stream2 id1 id2
Count int64
Block time.Duration
NoAck bool
}
func (c *cmdable) XReadGroup(a *XReadGroupArgs) *XStreamSliceCmd {
func (c cmdable) XReadGroup(a *XReadGroupArgs) *XStreamSliceCmd {
args := make([]interface{}, 0, 8+len(a.Streams))
args = append(args, "xreadgroup", "group", a.Group, a.Consumer)
if a.Count > 0 {
@@ -1481,23 +1517,23 @@ func (c *cmdable) XReadGroup(a *XReadGroupArgs) *XStreamSliceCmd {
if a.Block >= 0 {
cmd.setReadTimeout(a.Block)
}
c.process(cmd)
_ = c(cmd)
return cmd
}
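// ---- editor's example (not part of the upstream diff) ----
// A hedged sketch of a consumer-group read with the new API; the group,
// consumer, and stream names are hypothetical.
package main

import (
	"fmt"
	"time"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	res, err := client.XReadGroup(&redis.XReadGroupArgs{
		Group:    "mygroup",
		Consumer: "consumer-1",
		Streams:  []string{"mystream", ">"}, // ">" = entries never delivered to this group
		Count:    5,
		Block:    time.Second, // Block >= 0 sends BLOCK and extends the read timeout
	}).Result()
	fmt.Println(res, err)
}
// -----------------------------------------------------------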
func (c *cmdable) XAck(stream, group string, ids ...string) *IntCmd {
func (c cmdable) XAck(stream, group string, ids ...string) *IntCmd {
args := []interface{}{"xack", stream, group}
for _, id := range ids {
args = append(args, id)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XPending(stream, group string) *XPendingCmd {
func (c cmdable) XPending(stream, group string) *XPendingCmd {
cmd := NewXPendingCmd("xpending", stream, group)
c.process(cmd)
_ = c(cmd)
return cmd
}
@@ -1510,14 +1546,14 @@ type XPendingExtArgs struct {
Consumer string
}
func (c *cmdable) XPendingExt(a *XPendingExtArgs) *XPendingExtCmd {
func (c cmdable) XPendingExt(a *XPendingExtArgs) *XPendingExtCmd {
args := make([]interface{}, 0, 7)
args = append(args, "xpending", a.Stream, a.Group, a.Start, a.End, a.Count)
if a.Consumer != "" {
args = append(args, a.Consumer)
}
cmd := NewXPendingExtCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
@@ -1529,18 +1565,18 @@ type XClaimArgs struct {
Messages []string
}
func (c *cmdable) XClaim(a *XClaimArgs) *XMessageSliceCmd {
func (c cmdable) XClaim(a *XClaimArgs) *XMessageSliceCmd {
args := xClaimArgs(a)
cmd := NewXMessageSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XClaimJustID(a *XClaimArgs) *StringSliceCmd {
func (c cmdable) XClaimJustID(a *XClaimArgs) *StringSliceCmd {
args := xClaimArgs(a)
args = append(args, "justid")
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
@@ -1557,15 +1593,21 @@ func xClaimArgs(a *XClaimArgs) []interface{} {
return args
}
func (c *cmdable) XTrim(key string, maxLen int64) *IntCmd {
func (c cmdable) XTrim(key string, maxLen int64) *IntCmd {
cmd := NewIntCmd("xtrim", key, "maxlen", maxLen)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) XTrimApprox(key string, maxLen int64) *IntCmd {
func (c cmdable) XTrimApprox(key string, maxLen int64) *IntCmd {
cmd := NewIntCmd("xtrim", key, "maxlen", "~", maxLen)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c cmdable) XInfoGroups(key string) *XInfoGroupsCmd {
cmd := NewXInfoGroupsCmd(key)
_ = c(cmd)
return cmd
}
@@ -1585,13 +1627,14 @@ type ZWithKey struct {
// ZStore is used as an arg to ZInterStore and ZUnionStore.
type ZStore struct {
Keys []string
Weights []float64
// Can be SUM, MIN or MAX.
Aggregate string
}
// Redis `BZPOPMAX key [key ...] timeout` command.
func (c *cmdable) BZPopMax(timeout time.Duration, keys ...string) *ZWithKeyCmd {
func (c cmdable) BZPopMax(timeout time.Duration, keys ...string) *ZWithKeyCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "bzpopmax"
for i, key := range keys {
@@ -1600,12 +1643,12 @@ func (c *cmdable) BZPopMax(timeout time.Duration, keys ...string) *ZWithKeyCmd {
args[len(args)-1] = formatSec(timeout)
cmd := NewZWithKeyCmd(args...)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `BZPOPMIN key [key ...] timeout` command.
func (c *cmdable) BZPopMin(timeout time.Duration, keys ...string) *ZWithKeyCmd {
func (c cmdable) BZPopMin(timeout time.Duration, keys ...string) *ZWithKeyCmd {
args := make([]interface{}, 1+len(keys)+1)
args[0] = "bzpopmin"
for i, key := range keys {
@@ -1614,22 +1657,22 @@ func (c *cmdable) BZPopMin(timeout time.Duration, keys ...string) *ZWithKeyCmd {
args[len(args)-1] = formatSec(timeout)
cmd := NewZWithKeyCmd(args...)
cmd.setReadTimeout(timeout)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) zAdd(a []interface{}, n int, members ...Z) *IntCmd {
func (c cmdable) zAdd(a []interface{}, n int, members ...*Z) *IntCmd {
for i, m := range members {
a[n+2*i] = m.Score
a[n+2*i+1] = m.Member
}
cmd := NewIntCmd(a...)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `ZADD key score member [score member ...]` command.
func (c *cmdable) ZAdd(key string, members ...Z) *IntCmd {
func (c cmdable) ZAdd(key string, members ...*Z) *IntCmd {
const n = 2
a := make([]interface{}, n+2*len(members))
a[0], a[1] = "zadd", key
@@ -1637,7 +1680,7 @@ func (c *cmdable) ZAdd(key string, members ...Z) *IntCmd {
}
// Redis `ZADD key NX score member [score member ...]` command.
func (c *cmdable) ZAddNX(key string, members ...Z) *IntCmd {
func (c cmdable) ZAddNX(key string, members ...*Z) *IntCmd {
const n = 3
a := make([]interface{}, n+2*len(members))
a[0], a[1], a[2] = "zadd", key, "nx"
@@ -1645,7 +1688,7 @@ func (c *cmdable) ZAddNX(key string, members ...Z) *IntCmd {
}
// Redis `ZADD key XX score member [score member ...]` command.
func (c *cmdable) ZAddXX(key string, members ...Z) *IntCmd {
func (c cmdable) ZAddXX(key string, members ...*Z) *IntCmd {
const n = 3
a := make([]interface{}, n+2*len(members))
a[0], a[1], a[2] = "zadd", key, "xx"
@@ -1653,7 +1696,7 @@ func (c *cmdable) ZAddXX(key string, members ...Z) *IntCmd {
}
// Redis `ZADD key CH score member [score member ...]` command.
func (c *cmdable) ZAddCh(key string, members ...Z) *IntCmd {
func (c cmdable) ZAddCh(key string, members ...*Z) *IntCmd {
const n = 3
a := make([]interface{}, n+2*len(members))
a[0], a[1], a[2] = "zadd", key, "ch"
@@ -1661,7 +1704,7 @@ func (c *cmdable) ZAddCh(key string, members ...Z) *IntCmd {
}
// Redis `ZADD key NX CH score member [score member ...]` command.
func (c *cmdable) ZAddNXCh(key string, members ...Z) *IntCmd {
func (c cmdable) ZAddNXCh(key string, members ...*Z) *IntCmd {
const n = 4
a := make([]interface{}, n+2*len(members))
a[0], a[1], a[2], a[3] = "zadd", key, "nx", "ch"
@@ -1669,25 +1712,25 @@ func (c *cmdable) ZAddNXCh(key string, members ...Z) *IntCmd {
}
// Redis `ZADD key XX CH score member [score member ...]` command.
func (c *cmdable) ZAddXXCh(key string, members ...Z) *IntCmd {
func (c cmdable) ZAddXXCh(key string, members ...*Z) *IntCmd {
const n = 4
a := make([]interface{}, n+2*len(members))
a[0], a[1], a[2], a[3] = "zadd", key, "xx", "ch"
return c.zAdd(a, n, members...)
}
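// ---- editor's example (not part of the upstream diff) ----
// The diff changes the ZAdd family to take ...*Z instead of ...Z, so
// callers now pass pointers. A minimal sketch; the key and members are
// hypothetical.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	added, err := client.ZAdd("myzset",
		&redis.Z{Score: 1, Member: "one"},
		&redis.Z{Score: 2, Member: "two"},
	).Result()
	fmt.Println(added, err)
}
// -----------------------------------------------------------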
func (c *cmdable) zIncr(a []interface{}, n int, members ...Z) *FloatCmd {
func (c cmdable) zIncr(a []interface{}, n int, members ...*Z) *FloatCmd {
for i, m := range members {
a[n+2*i] = m.Score
a[n+2*i+1] = m.Member
}
cmd := NewFloatCmd(a...)
c.process(cmd)
_ = c(cmd)
return cmd
}
// Redis `ZADD key INCR score member` command.
func (c *cmdable) ZIncr(key string, member Z) *FloatCmd {
func (c cmdable) ZIncr(key string, member *Z) *FloatCmd {
const n = 3
a := make([]interface{}, n+2)
a[0], a[1], a[2] = "zadd", key, "incr"
@@ -1695,7 +1738,7 @@ func (c *cmdable) ZIncr(key string, member Z) *FloatCmd {
}
// Redis `ZADD key NX INCR score member` command.
func (c *cmdable) ZIncrNX(key string, member Z) *FloatCmd {
func (c cmdable) ZIncrNX(key string, member *Z) *FloatCmd {
const n = 4
a := make([]interface{}, n+2)
a[0], a[1], a[2], a[3] = "zadd", key, "incr", "nx"
@@ -1703,43 +1746,43 @@ func (c *cmdable) ZIncrNX(key string, member Z) *FloatCmd {
}
// Redis `ZADD key XX INCR score member` command.
func (c *cmdable) ZIncrXX(key string, member Z) *FloatCmd {
func (c cmdable) ZIncrXX(key string, member *Z) *FloatCmd {
const n = 4
a := make([]interface{}, n+2)
a[0], a[1], a[2], a[3] = "zadd", key, "incr", "xx"
return c.zIncr(a, n, member)
}
func (c *cmdable) ZCard(key string) *IntCmd {
func (c cmdable) ZCard(key string) *IntCmd {
cmd := NewIntCmd("zcard", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZCount(key, min, max string) *IntCmd {
func (c cmdable) ZCount(key, min, max string) *IntCmd {
cmd := NewIntCmd("zcount", key, min, max)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZLexCount(key, min, max string) *IntCmd {
func (c cmdable) ZLexCount(key, min, max string) *IntCmd {
cmd := NewIntCmd("zlexcount", key, min, max)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZIncrBy(key string, increment float64, member string) *FloatCmd {
func (c cmdable) ZIncrBy(key string, increment float64, member string) *FloatCmd {
cmd := NewFloatCmd("zincrby", key, increment, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZInterStore(destination string, store ZStore, keys ...string) *IntCmd {
args := make([]interface{}, 3+len(keys))
func (c cmdable) ZInterStore(destination string, store *ZStore) *IntCmd {
args := make([]interface{}, 3+len(store.Keys))
args[0] = "zinterstore"
args[1] = destination
args[2] = len(keys)
for i, key := range keys {
args[2] = len(store.Keys)
for i, key := range store.Keys {
args[3+i] = key
}
if len(store.Weights) > 0 {
@@ -1752,11 +1795,11 @@ func (c *cmdable) ZInterStore(destination string, store ZStore, keys ...string) *IntCmd {
args = append(args, "aggregate", store.Aggregate)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
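// ---- editor's example (not part of the upstream diff) ----
// ZInterStore (and ZUnionStore below) now carries its source keys inside
// *ZStore instead of a trailing variadic. A sketch with hypothetical keys.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	n, err := client.ZInterStore("dest", &redis.ZStore{
		Keys:      []string{"zset1", "zset2"},
		Weights:   []float64{2, 3},
		Aggregate: "MAX", // one of SUM, MIN or MAX
	}).Result()
	fmt.Println(n, err)
}
// -----------------------------------------------------------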
func (c *cmdable) ZPopMax(key string, count ...int64) *ZSliceCmd {
func (c cmdable) ZPopMax(key string, count ...int64) *ZSliceCmd {
args := []interface{}{
"zpopmax",
key,
@@ -1772,11 +1815,11 @@ func (c *cmdable) ZPopMax(key string, count ...int64) *ZSliceCmd {
}
cmd := NewZSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZPopMin(key string, count ...int64) *ZSliceCmd {
func (c cmdable) ZPopMin(key string, count ...int64) *ZSliceCmd {
args := []interface{}{
"zpopmin",
key,
@@ -1792,11 +1835,11 @@ func (c *cmdable) ZPopMin(key string, count ...int64) *ZSliceCmd {
}
cmd := NewZSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) zRange(key string, start, stop int64, withScores bool) *StringSliceCmd {
func (c cmdable) zRange(key string, start, stop int64, withScores bool) *StringSliceCmd {
args := []interface{}{
"zrange",
key,
@@ -1807,17 +1850,17 @@ func (c *cmdable) zRange(key string, start, stop int64, withScores bool) *StringSliceCmd {
args = append(args, "withscores")
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRange(key string, start, stop int64) *StringSliceCmd {
func (c cmdable) ZRange(key string, start, stop int64) *StringSliceCmd {
return c.zRange(key, start, stop, false)
}
func (c *cmdable) ZRangeWithScores(key string, start, stop int64) *ZSliceCmd {
func (c cmdable) ZRangeWithScores(key string, start, stop int64) *ZSliceCmd {
cmd := NewZSliceCmd("zrange", key, start, stop, "withscores")
c.process(cmd)
_ = c(cmd)
return cmd
}
@@ -1826,7 +1869,7 @@ type ZRangeBy struct {
Offset, Count int64
}
func (c *cmdable) zRangeBy(zcmd, key string, opt ZRangeBy, withScores bool) *StringSliceCmd {
func (c cmdable) zRangeBy(zcmd, key string, opt *ZRangeBy, withScores bool) *StringSliceCmd {
args := []interface{}{zcmd, key, opt.Min, opt.Max}
if withScores {
args = append(args, "withscores")
@@ -1840,19 +1883,19 @@ func (c *cmdable) zRangeBy(zcmd, key string, opt ZRangeBy, withScores bool) *StringSliceCmd {
)
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRangeByScore(key string, opt ZRangeBy) *StringSliceCmd {
func (c cmdable) ZRangeByScore(key string, opt *ZRangeBy) *StringSliceCmd {
return c.zRangeBy("zrangebyscore", key, opt, false)
}
func (c *cmdable) ZRangeByLex(key string, opt ZRangeBy) *StringSliceCmd {
func (c cmdable) ZRangeByLex(key string, opt *ZRangeBy) *StringSliceCmd {
return c.zRangeBy("zrangebylex", key, opt, false)
}
func (c *cmdable) ZRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd {
func (c cmdable) ZRangeByScoreWithScores(key string, opt *ZRangeBy) *ZSliceCmd {
args := []interface{}{"zrangebyscore", key, opt.Min, opt.Max, "withscores"}
if opt.Offset != 0 || opt.Count != 0 {
args = append(
@@ -1863,62 +1906,62 @@ func (c *cmdable) ZRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd {
)
}
cmd := NewZSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRank(key, member string) *IntCmd {
func (c cmdable) ZRank(key, member string) *IntCmd {
cmd := NewIntCmd("zrank", key, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRem(key string, members ...interface{}) *IntCmd {
func (c cmdable) ZRem(key string, members ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(members))
args[0] = "zrem"
args[1] = key
args = appendArgs(args, members)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRemRangeByRank(key string, start, stop int64) *IntCmd {
func (c cmdable) ZRemRangeByRank(key string, start, stop int64) *IntCmd {
cmd := NewIntCmd(
"zremrangebyrank",
key,
start,
stop,
)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRemRangeByScore(key, min, max string) *IntCmd {
func (c cmdable) ZRemRangeByScore(key, min, max string) *IntCmd {
cmd := NewIntCmd("zremrangebyscore", key, min, max)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRemRangeByLex(key, min, max string) *IntCmd {
func (c cmdable) ZRemRangeByLex(key, min, max string) *IntCmd {
cmd := NewIntCmd("zremrangebylex", key, min, max)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRevRange(key string, start, stop int64) *StringSliceCmd {
func (c cmdable) ZRevRange(key string, start, stop int64) *StringSliceCmd {
cmd := NewStringSliceCmd("zrevrange", key, start, stop)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRevRangeWithScores(key string, start, stop int64) *ZSliceCmd {
func (c cmdable) ZRevRangeWithScores(key string, start, stop int64) *ZSliceCmd {
cmd := NewZSliceCmd("zrevrange", key, start, stop, "withscores")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) zRevRangeBy(zcmd, key string, opt ZRangeBy) *StringSliceCmd {
func (c cmdable) zRevRangeBy(zcmd, key string, opt *ZRangeBy) *StringSliceCmd {
args := []interface{}{zcmd, key, opt.Max, opt.Min}
if opt.Offset != 0 || opt.Count != 0 {
args = append(
@@ -1929,19 +1972,19 @@ func (c *cmdable) zRevRangeBy(zcmd, key string, opt ZRangeBy) *StringSliceCmd {
)
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRevRangeByScore(key string, opt ZRangeBy) *StringSliceCmd {
func (c cmdable) ZRevRangeByScore(key string, opt *ZRangeBy) *StringSliceCmd {
return c.zRevRangeBy("zrevrangebyscore", key, opt)
}
func (c *cmdable) ZRevRangeByLex(key string, opt ZRangeBy) *StringSliceCmd {
func (c cmdable) ZRevRangeByLex(key string, opt *ZRangeBy) *StringSliceCmd {
return c.zRevRangeBy("zrevrangebylex", key, opt)
}
func (c *cmdable) ZRevRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd {
func (c cmdable) ZRevRangeByScoreWithScores(key string, opt *ZRangeBy) *ZSliceCmd {
args := []interface{}{"zrevrangebyscore", key, opt.Max, opt.Min, "withscores"}
if opt.Offset != 0 || opt.Count != 0 {
args = append(
@@ -1952,28 +1995,28 @@ func (c *cmdable) ZRevRangeByScoreWithScores(key string, opt ZRangeBy) *ZSliceCmd {
)
}
cmd := NewZSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZRevRank(key, member string) *IntCmd {
func (c cmdable) ZRevRank(key, member string) *IntCmd {
cmd := NewIntCmd("zrevrank", key, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZScore(key, member string) *FloatCmd {
func (c cmdable) ZScore(key, member string) *FloatCmd {
cmd := NewFloatCmd("zscore", key, member)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ZUnionStore(dest string, store ZStore, keys ...string) *IntCmd {
args := make([]interface{}, 3+len(keys))
func (c cmdable) ZUnionStore(dest string, store *ZStore) *IntCmd {
args := make([]interface{}, 3+len(store.Keys))
args[0] = "zunionstore"
args[1] = dest
args[2] = len(keys)
for i, key := range keys {
args[2] = len(store.Keys)
for i, key := range store.Keys {
args[3+i] = key
}
if len(store.Weights) > 0 {
@@ -1986,34 +2029,34 @@ func (c *cmdable) ZUnionStore(dest string, store ZStore, keys ...string) *IntCmd {
args = append(args, "aggregate", store.Aggregate)
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) PFAdd(key string, els ...interface{}) *IntCmd {
func (c cmdable) PFAdd(key string, els ...interface{}) *IntCmd {
args := make([]interface{}, 2, 2+len(els))
args[0] = "pfadd"
args[1] = key
args = appendArgs(args, els)
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PFCount(keys ...string) *IntCmd {
func (c cmdable) PFCount(keys ...string) *IntCmd {
args := make([]interface{}, 1+len(keys))
args[0] = "pfcount"
for i, key := range keys {
args[1+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PFMerge(dest string, keys ...string) *StatusCmd {
func (c cmdable) PFMerge(dest string, keys ...string) *StatusCmd {
args := make([]interface{}, 2+len(keys))
args[0] = "pfmerge"
args[1] = dest
@@ -2021,33 +2064,33 @@ func (c *cmdable) PFMerge(dest string, keys ...string) *StatusCmd {
args[2+i] = key
}
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) BgRewriteAOF() *StatusCmd {
func (c cmdable) BgRewriteAOF() *StatusCmd {
cmd := NewStatusCmd("bgrewriteaof")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) BgSave() *StatusCmd {
func (c cmdable) BgSave() *StatusCmd {
cmd := NewStatusCmd("bgsave")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClientKill(ipPort string) *StatusCmd {
func (c cmdable) ClientKill(ipPort string) *StatusCmd {
cmd := NewStatusCmd("client", "kill", ipPort)
c.process(cmd)
_ = c(cmd)
return cmd
}
// ClientKillByFilter is the new-style syntax, while ClientKill is the old one:
// CLIENT KILL <option> [value] ... <option> [value]
func (c *cmdable) ClientKillByFilter(keys ...string) *IntCmd {
func (c cmdable) ClientKillByFilter(keys ...string) *IntCmd {
args := make([]interface{}, 2+len(keys))
args[0] = "client"
args[1] = "kill"
@@ -2055,141 +2098,136 @@ func (c *cmdable) ClientKillByFilter(keys ...string) *IntCmd {
args[2+i] = key
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
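// ---- editor's example (not part of the upstream diff) ----
// The filter form takes option/value pairs; a sketch killing by client ID
// (the ID shown is hypothetical).
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	killed, err := client.ClientKillByFilter("ID", "12345").Result()
	fmt.Println(killed, err)
}
// -----------------------------------------------------------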
func (c *cmdable) ClientList() *StringCmd {
func (c cmdable) ClientList() *StringCmd {
cmd := NewStringCmd("client", "list")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClientPause(dur time.Duration) *BoolCmd {
func (c cmdable) ClientPause(dur time.Duration) *BoolCmd {
cmd := NewBoolCmd("client", "pause", formatMs(dur))
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClientID() *IntCmd {
func (c cmdable) ClientID() *IntCmd {
cmd := NewIntCmd("client", "id")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClientUnblock(id int64) *IntCmd {
func (c cmdable) ClientUnblock(id int64) *IntCmd {
cmd := NewIntCmd("client", "unblock", id)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClientUnblockWithError(id int64) *IntCmd {
func (c cmdable) ClientUnblockWithError(id int64) *IntCmd {
cmd := NewIntCmd("client", "unblock", id, "error")
c.process(cmd)
_ = c(cmd)
return cmd
}
// ClientSetName assigns a name to the connection.
func (c *statefulCmdable) ClientSetName(name string) *BoolCmd {
func (c statefulCmdable) ClientSetName(name string) *BoolCmd {
cmd := NewBoolCmd("client", "setname", name)
c.process(cmd)
_ = c(cmd)
return cmd
}
// ClientGetName returns the name of the connection.
func (c *cmdable) ClientGetName() *StringCmd {
func (c cmdable) ClientGetName() *StringCmd {
cmd := NewStringCmd("client", "getname")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ConfigGet(parameter string) *SliceCmd {
func (c cmdable) ConfigGet(parameter string) *SliceCmd {
cmd := NewSliceCmd("config", "get", parameter)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ConfigResetStat() *StatusCmd {
func (c cmdable) ConfigResetStat() *StatusCmd {
cmd := NewStatusCmd("config", "resetstat")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ConfigSet(parameter, value string) *StatusCmd {
func (c cmdable) ConfigSet(parameter, value string) *StatusCmd {
cmd := NewStatusCmd("config", "set", parameter, value)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ConfigRewrite() *StatusCmd {
func (c cmdable) ConfigRewrite() *StatusCmd {
cmd := NewStatusCmd("config", "rewrite")
c.process(cmd)
_ = c(cmd)
return cmd
}
// Deprecated: Use DBSize instead.
func (c *cmdable) DbSize() *IntCmd {
func (c cmdable) DbSize() *IntCmd {
return c.DBSize()
}
func (c *cmdable) DBSize() *IntCmd {
func (c cmdable) DBSize() *IntCmd {
cmd := NewIntCmd("dbsize")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) FlushAll() *StatusCmd {
func (c cmdable) FlushAll() *StatusCmd {
cmd := NewStatusCmd("flushall")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) FlushAllAsync() *StatusCmd {
func (c cmdable) FlushAllAsync() *StatusCmd {
cmd := NewStatusCmd("flushall", "async")
c.process(cmd)
_ = c(cmd)
return cmd
}
// Deprecated: Use FlushDB instead.
func (c *cmdable) FlushDb() *StatusCmd {
return c.FlushDB()
}
func (c *cmdable) FlushDB() *StatusCmd {
func (c cmdable) FlushDB() *StatusCmd {
cmd := NewStatusCmd("flushdb")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) FlushDBAsync() *StatusCmd {
func (c cmdable) FlushDBAsync() *StatusCmd {
cmd := NewStatusCmd("flushdb", "async")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Info(section ...string) *StringCmd {
func (c cmdable) Info(section ...string) *StringCmd {
args := []interface{}{"info"}
if len(section) > 0 {
args = append(args, section[0])
}
cmd := NewStringCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) LastSave() *IntCmd {
func (c cmdable) LastSave() *IntCmd {
cmd := NewIntCmd("lastsave")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) Save() *StatusCmd {
func (c cmdable) Save() *StatusCmd {
cmd := NewStatusCmd("save")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) shutdown(modifier string) *StatusCmd {
func (c cmdable) shutdown(modifier string) *StatusCmd {
var args []interface{}
if modifier == "" {
args = []interface{}{"shutdown"}
@@ -2197,7 +2235,7 @@ func (c *cmdable) shutdown(modifier string) *StatusCmd {
args = []interface{}{"shutdown", modifier}
}
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
if err := cmd.Err(); err != nil {
if err == io.EOF {
// Server quit as expected.
@@ -2211,41 +2249,41 @@ func (c *cmdable) shutdown(modifier string) *StatusCmd {
return cmd
}
func (c *cmdable) Shutdown() *StatusCmd {
func (c cmdable) Shutdown() *StatusCmd {
return c.shutdown("")
}
func (c *cmdable) ShutdownSave() *StatusCmd {
func (c cmdable) ShutdownSave() *StatusCmd {
return c.shutdown("save")
}
func (c *cmdable) ShutdownNoSave() *StatusCmd {
func (c cmdable) ShutdownNoSave() *StatusCmd {
return c.shutdown("nosave")
}
func (c *cmdable) SlaveOf(host, port string) *StatusCmd {
func (c cmdable) SlaveOf(host, port string) *StatusCmd {
cmd := NewStatusCmd("slaveof", host, port)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) SlowLog() {
func (c cmdable) SlowLog() {
panic("not implemented")
}
func (c *cmdable) Sync() {
func (c cmdable) Sync() {
panic("not implemented")
}
func (c *cmdable) Time() *TimeCmd {
func (c cmdable) Time() *TimeCmd {
cmd := NewTimeCmd("time")
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) Eval(script string, keys []string, args ...interface{}) *Cmd {
func (c cmdable) Eval(script string, keys []string, args ...interface{}) *Cmd {
cmdArgs := make([]interface{}, 3+len(keys), 3+len(keys)+len(args))
cmdArgs[0] = "eval"
cmdArgs[1] = script
@@ -2255,11 +2293,11 @@ func (c *cmdable) Eval(script string, keys []string, args ...interface{}) *Cmd {
}
cmdArgs = appendArgs(cmdArgs, args)
cmd := NewCmd(cmdArgs...)
c.process(cmd)
_ = c(cmd)
return cmd
}
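// ---- editor's example (not part of the upstream diff) ----
// Eval separates KEYS from ARGV: the slice fills KEYS[1..n] and the
// variadic fills ARGV[1..n]. The key and value are hypothetical.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	script := "return redis.call('SET', KEYS[1], ARGV[1])"
	res, err := client.Eval(script, []string{"mykey"}, "myvalue").Result()
	fmt.Println(res, err)
}
// -----------------------------------------------------------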
func (c *cmdable) EvalSha(sha1 string, keys []string, args ...interface{}) *Cmd {
func (c cmdable) EvalSha(sha1 string, keys []string, args ...interface{}) *Cmd {
cmdArgs := make([]interface{}, 3+len(keys), 3+len(keys)+len(args))
cmdArgs[0] = "evalsha"
cmdArgs[1] = sha1
@@ -2269,11 +2307,11 @@ func (c *cmdable) EvalSha(sha1 string, keys []string, args ...interface{}) *Cmd {
}
cmdArgs = appendArgs(cmdArgs, args)
cmd := NewCmd(cmdArgs...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ScriptExists(hashes ...string) *BoolSliceCmd {
func (c cmdable) ScriptExists(hashes ...string) *BoolSliceCmd {
args := make([]interface{}, 2+len(hashes))
args[0] = "script"
args[1] = "exists"
@@ -2281,56 +2319,56 @@ func (c *cmdable) ScriptExists(hashes ...string) *BoolSliceCmd {
args[2+i] = hash
}
cmd := NewBoolSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ScriptFlush() *StatusCmd {
func (c cmdable) ScriptFlush() *StatusCmd {
cmd := NewStatusCmd("script", "flush")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ScriptKill() *StatusCmd {
func (c cmdable) ScriptKill() *StatusCmd {
cmd := NewStatusCmd("script", "kill")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ScriptLoad(script string) *StringCmd {
func (c cmdable) ScriptLoad(script string) *StringCmd {
cmd := NewStringCmd("script", "load", script)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) DebugObject(key string) *StringCmd {
func (c cmdable) DebugObject(key string) *StringCmd {
cmd := NewStringCmd("debug", "object", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
// Publish posts the message to the channel.
func (c *cmdable) Publish(channel string, message interface{}) *IntCmd {
func (c cmdable) Publish(channel string, message interface{}) *IntCmd {
cmd := NewIntCmd("publish", channel, message)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PubSubChannels(pattern string) *StringSliceCmd {
func (c cmdable) PubSubChannels(pattern string) *StringSliceCmd {
args := []interface{}{"pubsub", "channels"}
if pattern != "*" {
args = append(args, pattern)
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PubSubNumSub(channels ...string) *StringIntMapCmd {
func (c cmdable) PubSubNumSub(channels ...string) *StringIntMapCmd {
args := make([]interface{}, 2+len(channels))
args[0] = "pubsub"
args[1] = "numsub"
@@ -2338,91 +2376,91 @@ func (c *cmdable) PubSubNumSub(channels ...string) *StringIntMapCmd {
args[2+i] = channel
}
cmd := NewStringIntMapCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) PubSubNumPat() *IntCmd {
func (c cmdable) PubSubNumPat() *IntCmd {
cmd := NewIntCmd("pubsub", "numpat")
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) ClusterSlots() *ClusterSlotsCmd {
func (c cmdable) ClusterSlots() *ClusterSlotsCmd {
cmd := NewClusterSlotsCmd("cluster", "slots")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterNodes() *StringCmd {
func (c cmdable) ClusterNodes() *StringCmd {
cmd := NewStringCmd("cluster", "nodes")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterMeet(host, port string) *StatusCmd {
func (c cmdable) ClusterMeet(host, port string) *StatusCmd {
cmd := NewStatusCmd("cluster", "meet", host, port)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterForget(nodeID string) *StatusCmd {
func (c cmdable) ClusterForget(nodeID string) *StatusCmd {
cmd := NewStatusCmd("cluster", "forget", nodeID)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterReplicate(nodeID string) *StatusCmd {
func (c cmdable) ClusterReplicate(nodeID string) *StatusCmd {
cmd := NewStatusCmd("cluster", "replicate", nodeID)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterResetSoft() *StatusCmd {
func (c cmdable) ClusterResetSoft() *StatusCmd {
cmd := NewStatusCmd("cluster", "reset", "soft")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterResetHard() *StatusCmd {
func (c cmdable) ClusterResetHard() *StatusCmd {
cmd := NewStatusCmd("cluster", "reset", "hard")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterInfo() *StringCmd {
func (c cmdable) ClusterInfo() *StringCmd {
cmd := NewStringCmd("cluster", "info")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterKeySlot(key string) *IntCmd {
func (c cmdable) ClusterKeySlot(key string) *IntCmd {
cmd := NewIntCmd("cluster", "keyslot", key)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterGetKeysInSlot(slot int, count int) *StringSliceCmd {
func (c cmdable) ClusterGetKeysInSlot(slot int, count int) *StringSliceCmd {
cmd := NewStringSliceCmd("cluster", "getkeysinslot", slot, count)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterCountFailureReports(nodeID string) *IntCmd {
func (c cmdable) ClusterCountFailureReports(nodeID string) *IntCmd {
cmd := NewIntCmd("cluster", "count-failure-reports", nodeID)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterCountKeysInSlot(slot int) *IntCmd {
func (c cmdable) ClusterCountKeysInSlot(slot int) *IntCmd {
cmd := NewIntCmd("cluster", "countkeysinslot", slot)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterDelSlots(slots ...int) *StatusCmd {
func (c cmdable) ClusterDelSlots(slots ...int) *StatusCmd {
args := make([]interface{}, 2+len(slots))
args[0] = "cluster"
args[1] = "delslots"
@@ -2430,11 +2468,11 @@ func (c *cmdable) ClusterDelSlots(slots ...int) *StatusCmd {
args[2+i] = slot
}
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterDelSlotsRange(min, max int) *StatusCmd {
func (c cmdable) ClusterDelSlotsRange(min, max int) *StatusCmd {
size := max - min + 1
slots := make([]int, size)
for i := 0; i < size; i++ {
@@ -2443,37 +2481,37 @@ func (c *cmdable) ClusterDelSlotsRange(min, max int) *StatusCmd {
return c.ClusterDelSlots(slots...)
}
func (c *cmdable) ClusterSaveConfig() *StatusCmd {
func (c cmdable) ClusterSaveConfig() *StatusCmd {
cmd := NewStatusCmd("cluster", "saveconfig")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterSlaves(nodeID string) *StringSliceCmd {
func (c cmdable) ClusterSlaves(nodeID string) *StringSliceCmd {
cmd := NewStringSliceCmd("cluster", "slaves", nodeID)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ReadOnly() *StatusCmd {
func (c cmdable) ReadOnly() *StatusCmd {
cmd := NewStatusCmd("readonly")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ReadWrite() *StatusCmd {
func (c cmdable) ReadWrite() *StatusCmd {
cmd := NewStatusCmd("readwrite")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterFailover() *StatusCmd {
func (c cmdable) ClusterFailover() *StatusCmd {
cmd := NewStatusCmd("cluster", "failover")
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterAddSlots(slots ...int) *StatusCmd {
func (c cmdable) ClusterAddSlots(slots ...int) *StatusCmd {
args := make([]interface{}, 2+len(slots))
args[0] = "cluster"
args[1] = "addslots"
@@ -2481,11 +2519,11 @@ func (c *cmdable) ClusterAddSlots(slots ...int) *StatusCmd {
args[2+i] = num
}
cmd := NewStatusCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) ClusterAddSlotsRange(min, max int) *StatusCmd {
func (c cmdable) ClusterAddSlotsRange(min, max int) *StatusCmd {
size := max - min + 1
slots := make([]int, size)
for i := 0; i < size; i++ {
@@ -2496,7 +2534,7 @@ func (c *cmdable) ClusterAddSlotsRange(min, max int) *StatusCmd {
//------------------------------------------------------------------------------
func (c *cmdable) GeoAdd(key string, geoLocation ...*GeoLocation) *IntCmd {
func (c cmdable) GeoAdd(key string, geoLocation ...*GeoLocation) *IntCmd {
args := make([]interface{}, 2+3*len(geoLocation))
args[0] = "geoadd"
args[1] = key
@@ -2506,44 +2544,66 @@ func (c *cmdable) GeoAdd(key string, geoLocation ...*GeoLocation) *IntCmd {
args[2+3*i+2] = eachLoc.Name
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoRadius(key string, longitude, latitude float64, query *GeoRadiusQuery) *GeoLocationCmd {
cmd := NewGeoLocationCmd(query, "georadius", key, longitude, latitude)
c.process(cmd)
return cmd
}
func (c *cmdable) GeoRadiusRO(key string, longitude, latitude float64, query *GeoRadiusQuery) *GeoLocationCmd {
// GeoRadius is a read-only GEORADIUS_RO command.
func (c cmdable) GeoRadius(key string, longitude, latitude float64, query *GeoRadiusQuery) *GeoLocationCmd {
cmd := NewGeoLocationCmd(query, "georadius_ro", key, longitude, latitude)
c.process(cmd)
if query.Store != "" || query.StoreDist != "" {
cmd.SetErr(errors.New("GeoRadius does not support Store or StoreDist"))
return cmd
}
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoRadiusByMember(key, member string, query *GeoRadiusQuery) *GeoLocationCmd {
cmd := NewGeoLocationCmd(query, "georadiusbymember", key, member)
c.process(cmd)
// GeoRadiusStore is a writing GEORADIUS command.
func (c cmdable) GeoRadiusStore(key string, longitude, latitude float64, query *GeoRadiusQuery) *IntCmd {
args := geoLocationArgs(query, "georadius", key, longitude, latitude)
cmd := NewIntCmd(args...)
if query.Store == "" && query.StoreDist == "" {
cmd.SetErr(errors.New("GeoRadiusStore requires Store or StoreDist"))
return cmd
}
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoRadiusByMemberRO(key, member string, query *GeoRadiusQuery) *GeoLocationCmd {
// GeoRadiusByMember is a read-only GEORADIUSBYMEMBER_RO command.
func (c cmdable) GeoRadiusByMember(key, member string, query *GeoRadiusQuery) *GeoLocationCmd {
cmd := NewGeoLocationCmd(query, "georadiusbymember_ro", key, member)
c.process(cmd)
if query.Store != "" || query.StoreDist != "" {
cmd.SetErr(errors.New("GeoRadiusByMember does not support Store or StoreDist"))
return cmd
}
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoDist(key string, member1, member2, unit string) *FloatCmd {
// GeoRadiusByMemberStore is a writing GEORADIUSBYMEMBER command.
func (c cmdable) GeoRadiusByMemberStore(key, member string, query *GeoRadiusQuery) *IntCmd {
args := geoLocationArgs(query, "georadiusbymember", key, member)
cmd := NewIntCmd(args...)
if query.Store == "" && query.StoreDist == "" {
cmd.SetErr(errors.New("GeoRadiusByMemberStore requires Store or StoreDist"))
return cmd
}
_ = c(cmd)
return cmd
}
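// ---- editor's example (not part of the upstream diff) ----
// The diff splits each GEORADIUS call into a read-only variant (which
// rejects Store/StoreDist) and a *Store variant (which requires one of
// them). A sketch with hypothetical key names and coordinates.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	// read-only: GEORADIUS_RO under the hood
	locs, err := client.GeoRadius("geo", 13.36, 38.11, &redis.GeoRadiusQuery{
		Radius: 200,
		Unit:   "km",
	}).Result()
	fmt.Println(locs, err)
	// writing: must set Store or StoreDist, or the command errors out locally
	err = client.GeoRadiusStore("geo", 13.36, 38.11, &redis.GeoRadiusQuery{
		Radius: 200,
		Unit:   "km",
		Store:  "geo:nearby",
	}).Err()
	fmt.Println(err)
}
// -----------------------------------------------------------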
func (c cmdable) GeoDist(key string, member1, member2, unit string) *FloatCmd {
if unit == "" {
unit = "km"
}
cmd := NewFloatCmd("geodist", key, member1, member2, unit)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoHash(key string, members ...string) *StringSliceCmd {
func (c cmdable) GeoHash(key string, members ...string) *StringSliceCmd {
args := make([]interface{}, 2+len(members))
args[0] = "geohash"
args[1] = key
@@ -2551,11 +2611,11 @@ func (c *cmdable) GeoHash(key string, members ...string) *StringSliceCmd {
args[2+i] = member
}
cmd := NewStringSliceCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
func (c *cmdable) GeoPos(key string, members ...string) *GeoPosCmd {
func (c cmdable) GeoPos(key string, members ...string) *GeoPosCmd {
args := make([]interface{}, 2+len(members))
args[0] = "geopos"
args[1] = key
@@ -2563,13 +2623,13 @@ func (c *cmdable) GeoPos(key string, members ...string) *GeoPosCmd {
args[2+i] = member
}
cmd := NewGeoPosCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}
//------------------------------------------------------------------------------
func (c *cmdable) MemoryUsage(key string, samples ...int) *IntCmd {
func (c cmdable) MemoryUsage(key string, samples ...int) *IntCmd {
args := []interface{}{"memory", "usage", key}
if len(samples) > 0 {
if len(samples) != 1 {
@@ -2578,6 +2638,6 @@ func (c *cmdable) MemoryUsage(key string, samples ...int) *IntCmd {
args = append(args, "SAMPLES", samples[0])
}
cmd := NewIntCmd(args...)
c.process(cmd)
_ = c(cmd)
return cmd
}

vendor/github.com/go-redis/redis/v7/error.go generated vendored Normal file

@@ -0,0 +1,108 @@
package redis
import (
"context"
"io"
"net"
"strings"
"github.com/go-redis/redis/v7/internal/pool"
"github.com/go-redis/redis/v7/internal/proto"
)
var ErrClosed = pool.ErrClosed
type Error interface {
error
// RedisError is a no-op function but
// serves to distinguish types that are Redis
// errors from ordinary errors: a type is a
// Redis error if it has a RedisError method.
RedisError()
}
var _ Error = proto.RedisError("")
func isRetryableError(err error, retryTimeout bool) bool {
switch err {
case nil, context.Canceled, context.DeadlineExceeded:
return false
case io.EOF:
return true
}
if netErr, ok := err.(net.Error); ok {
if netErr.Timeout() {
return retryTimeout
}
return true
}
s := err.Error()
if s == "ERR max number of clients reached" {
return true
}
if strings.HasPrefix(s, "LOADING ") {
return true
}
if strings.HasPrefix(s, "READONLY ") {
return true
}
if strings.HasPrefix(s, "CLUSTERDOWN ") {
return true
}
return false
}
func isRedisError(err error) bool {
_, ok := err.(proto.RedisError)
return ok
}
func isBadConn(err error, allowTimeout bool) bool {
if err == nil {
return false
}
if isRedisError(err) {
// Close connections in read-only state in case a domain address is
// used and the domain resolves to a different Redis server. See #790.
return isReadOnlyError(err)
}
if allowTimeout {
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
return false
}
}
return true
}
func isMovedError(err error) (moved bool, ask bool, addr string) {
if !isRedisError(err) {
return
}
s := err.Error()
switch {
case strings.HasPrefix(s, "MOVED "):
moved = true
case strings.HasPrefix(s, "ASK "):
ask = true
default:
return
}
ind := strings.LastIndex(s, " ")
if ind == -1 {
return false, false, ""
}
addr = s[ind+1:]
return
}
func isLoadingError(err error) bool {
return strings.HasPrefix(err.Error(), "LOADING ")
}
func isReadOnlyError(err error) bool {
return strings.HasPrefix(err.Error(), "READONLY ")
}
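// ---- editor's example (not part of the upstream diff) ----
// A standalone, stdlib-only sketch of the address extraction that
// isMovedError performs on a cluster redirect error string.
package main

import (
	"fmt"
	"strings"
)

func parseMoved(s string) (addr string, ok bool) {
	if !strings.HasPrefix(s, "MOVED ") && !strings.HasPrefix(s, "ASK ") {
		return "", false
	}
	// the node address is the last space-separated field
	ind := strings.LastIndex(s, " ")
	if ind == -1 {
		return "", false
	}
	return s[ind+1:], true
}

func main() {
	fmt.Println(parseMoved("MOVED 3999 127.0.0.1:6381")) // 127.0.0.1:6381 true
}
// -----------------------------------------------------------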

vendor/github.com/go-redis/redis/v7/go.mod generated vendored Normal file

@@ -0,0 +1,15 @@
module github.com/go-redis/redis/v7
require (
github.com/golang/protobuf v1.3.2 // indirect
github.com/kr/pretty v0.1.0 // indirect
github.com/onsi/ginkgo v1.10.1
github.com/onsi/gomega v1.7.0
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 // indirect
golang.org/x/sys v0.0.0-20191010194322-b09406accb47 // indirect
golang.org/x/text v0.3.2 // indirect
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 // indirect
gopkg.in/yaml.v2 v2.2.4 // indirect
)
go 1.11

vendor/github.com/go-redis/redis/v7/go.sum generated vendored Normal file

@@ -0,0 +1,47 @@
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f h1:wMNYb4v58l5UBM7MYRLPG6ZhfOqbKu7X5eyFl8ZhKvA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47 h1:/XfQ9z7ib8eEJX2hdgFTZJ/ntt0swNk5oYBziWeTCvY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

vendor/github.com/go-redis/redis/v7/internal/log.go generated vendored Normal file

@@ -0,0 +1,8 @@
package internal
import (
"log"
"os"
)
var Logger = log.New(os.Stderr, "redis: ", log.LstdFlags|log.Lshortfile)

vendor/github.com/go-redis/redis/v7/internal/pool/conn.go generated vendored Normal file

@@ -0,0 +1,118 @@
package pool
import (
"context"
"net"
"sync/atomic"
"time"
"github.com/go-redis/redis/v7/internal/proto"
)
var noDeadline = time.Time{}
type Conn struct {
netConn net.Conn
rd *proto.Reader
wr *proto.Writer
Inited bool
pooled bool
createdAt time.Time
usedAt int64 // atomic
}
func NewConn(netConn net.Conn) *Conn {
cn := &Conn{
netConn: netConn,
createdAt: time.Now(),
}
cn.rd = proto.NewReader(netConn)
cn.wr = proto.NewWriter(netConn)
cn.SetUsedAt(time.Now())
return cn
}
func (cn *Conn) UsedAt() time.Time {
unix := atomic.LoadInt64(&cn.usedAt)
return time.Unix(unix, 0)
}
func (cn *Conn) SetUsedAt(tm time.Time) {
atomic.StoreInt64(&cn.usedAt, tm.Unix())
}
func (cn *Conn) SetNetConn(netConn net.Conn) {
cn.netConn = netConn
cn.rd.Reset(netConn)
cn.wr.Reset(netConn)
}
func (cn *Conn) Write(b []byte) (int, error) {
return cn.netConn.Write(b)
}
func (cn *Conn) RemoteAddr() net.Addr {
return cn.netConn.RemoteAddr()
}
func (cn *Conn) WithReader(ctx context.Context, timeout time.Duration, fn func(rd *proto.Reader) error) error {
err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout))
if err != nil {
return err
}
return fn(cn.rd)
}
func (cn *Conn) WithWriter(
ctx context.Context, timeout time.Duration, fn func(wr *proto.Writer) error,
) error {
err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout))
if err != nil {
return err
}
if cn.wr.Buffered() > 0 {
cn.wr.Reset(cn.netConn)
}
err = fn(cn.wr)
if err != nil {
return err
}
return cn.wr.Flush()
}
func (cn *Conn) Close() error {
return cn.netConn.Close()
}
func (cn *Conn) deadline(ctx context.Context, timeout time.Duration) time.Time {
tm := time.Now()
cn.SetUsedAt(tm)
if timeout > 0 {
tm = tm.Add(timeout)
}
if ctx != nil {
deadline, ok := ctx.Deadline()
if ok {
if timeout == 0 {
return deadline
}
if deadline.Before(tm) {
return deadline
}
return tm
}
}
if timeout > 0 {
return tm
}
return noDeadline
}

vendor/github.com/go-redis/redis/v7/internal/pool/pool.go generated vendored

@@ -1,13 +1,14 @@
package pool
import (
"context"
"errors"
"net"
"sync"
"sync/atomic"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/v7/internal"
)
var ErrClosed = errors.New("redis: client is closed")
@@ -33,12 +34,12 @@ type Stats struct {
}
type Pooler interface {
NewConn() (*Conn, error)
NewConn(context.Context) (*Conn, error)
CloseConn(*Conn) error
Get() (*Conn, error)
Get(context.Context) (*Conn, error)
Put(*Conn)
Remove(*Conn)
Remove(*Conn, error)
Len() int
IdleLen() int
@@ -48,7 +49,7 @@ type Pooler interface {
}
type Options struct {
Dialer func() (net.Conn, error)
Dialer func(context.Context) (net.Conn, error)
OnClose func(*Conn) error
PoolSize int
@@ -77,7 +78,8 @@ type ConnPool struct {
stats Stats
_closed uint32 // atomic
_closed uint32 // atomic
closedCh chan struct{}
}
var _ Pooler = (*ConnPool)(nil)
@@ -89,11 +91,12 @@ func NewConnPool(opt *Options) *ConnPool {
queue: make(chan struct{}, opt.PoolSize),
conns: make([]*Conn, 0, opt.PoolSize),
idleConns: make([]*Conn, 0, opt.PoolSize),
closedCh: make(chan struct{}),
}
for i := 0; i < opt.MinIdleConns; i++ {
p.checkMinIdleConns()
}
p.connsMu.Lock()
p.checkMinIdleConns()
p.connsMu.Unlock()
if opt.IdleTimeout > 0 && opt.IdleCheckFrequency > 0 {
go p.reaper(opt.IdleCheckFrequency)
@@ -106,31 +109,40 @@ func (p *ConnPool) checkMinIdleConns() {
if p.opt.MinIdleConns == 0 {
return
}
if p.poolSize < p.opt.PoolSize && p.idleConnsLen < p.opt.MinIdleConns {
for p.poolSize < p.opt.PoolSize && p.idleConnsLen < p.opt.MinIdleConns {
p.poolSize++
p.idleConnsLen++
go p.addIdleConn()
go func() {
err := p.addIdleConn()
if err != nil {
p.connsMu.Lock()
p.poolSize--
p.idleConnsLen--
p.connsMu.Unlock()
}
}()
}
}
func (p *ConnPool) addIdleConn() {
cn, err := p.newConn(true)
func (p *ConnPool) addIdleConn() error {
cn, err := p.dialConn(context.TODO(), true)
if err != nil {
return
return err
}
p.connsMu.Lock()
p.conns = append(p.conns, cn)
p.idleConns = append(p.idleConns, cn)
p.connsMu.Unlock()
return nil
}
func (p *ConnPool) NewConn() (*Conn, error) {
return p._NewConn(false)
func (p *ConnPool) NewConn(ctx context.Context) (*Conn, error) {
return p.newConn(ctx, false)
}
func (p *ConnPool) _NewConn(pooled bool) (*Conn, error) {
cn, err := p.newConn(pooled)
func (p *ConnPool) newConn(ctx context.Context, pooled bool) (*Conn, error) {
cn, err := p.dialConn(ctx, pooled)
if err != nil {
return nil, err
}
@@ -138,17 +150,18 @@ func (p *ConnPool) _NewConn(pooled bool) (*Conn, error) {
p.connsMu.Lock()
p.conns = append(p.conns, cn)
if pooled {
if p.poolSize < p.opt.PoolSize {
p.poolSize++
} else {
// If pool is full remove the cn on next Put.
if p.poolSize >= p.opt.PoolSize {
cn.pooled = false
} else {
p.poolSize++
}
}
p.connsMu.Unlock()
return cn, nil
}
func (p *ConnPool) newConn(pooled bool) (*Conn, error) {
func (p *ConnPool) dialConn(ctx context.Context, pooled bool) (*Conn, error) {
if p.closed() {
return nil, ErrClosed
}
@@ -157,7 +170,7 @@ func (p *ConnPool) newConn(pooled bool) (*Conn, error) {
return nil, p.getLastDialError()
}
netConn, err := p.opt.Dialer()
netConn, err := p.opt.Dialer(ctx)
if err != nil {
p.setLastDialError(err)
if atomic.AddUint32(&p.dialErrorsNum, 1) == uint32(p.opt.PoolSize) {
@@ -177,7 +190,7 @@ func (p *ConnPool) tryDial() {
return
}
conn, err := p.opt.Dialer()
conn, err := p.opt.Dialer(context.Background())
if err != nil {
p.setLastDialError(err)
time.Sleep(time.Second)
@@ -204,12 +217,12 @@ func (p *ConnPool) getLastDialError() error {
}
// Get returns an existing connection from the pool or creates a new one.
func (p *ConnPool) Get() (*Conn, error) {
func (p *ConnPool) Get(ctx context.Context) (*Conn, error) {
if p.closed() {
return nil, ErrClosed
}
err := p.waitTurn()
err := p.waitTurn(ctx)
if err != nil {
return nil, err
}
@@ -234,7 +247,7 @@ func (p *ConnPool) Get() (*Conn, error) {
atomic.AddUint32(&p.stats.Misses, 1)
newcn, err := p._NewConn(true)
newcn, err := p.newConn(ctx, true)
if err != nil {
p.freeTurn()
return nil, err
@@ -247,26 +260,39 @@ func (p *ConnPool) getTurn() {
p.queue <- struct{}{}
}
func (p *ConnPool) waitTurn() error {
func (p *ConnPool) waitTurn(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
select {
case p.queue <- struct{}{}:
return nil
default:
timer := timers.Get().(*time.Timer)
timer.Reset(p.opt.PoolTimeout)
}
select {
case p.queue <- struct{}{}:
if !timer.Stop() {
<-timer.C
}
timers.Put(timer)
return nil
case <-timer.C:
timers.Put(timer)
atomic.AddUint32(&p.stats.Timeouts, 1)
return ErrPoolTimeout
timer := timers.Get().(*time.Timer)
timer.Reset(p.opt.PoolTimeout)
select {
case <-ctx.Done():
if !timer.Stop() {
<-timer.C
}
timers.Put(timer)
return ctx.Err()
case p.queue <- struct{}{}:
if !timer.Stop() {
<-timer.C
}
timers.Put(timer)
return nil
case <-timer.C:
timers.Put(timer)
atomic.AddUint32(&p.stats.Timeouts, 1)
return ErrPoolTimeout
}
}
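// ---- editor's example (not part of the upstream diff) ----
// A standalone, stdlib-only sketch of the pattern the rewritten waitTurn
// implements: a buffered channel as a semaphore, a fast path, then a
// three-way select over context cancellation, a freed slot, and the pool
// timeout.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errPoolTimeout = errors.New("pool timeout")

func waitTurn(ctx context.Context, queue chan struct{}, poolTimeout time.Duration) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}
	select {
	case queue <- struct{}{}: // fast path: a slot is free
		return nil
	default:
	}
	timer := time.NewTimer(poolTimeout)
	defer timer.Stop()
	select {
	case <-ctx.Done():
		return ctx.Err()
	case queue <- struct{}{}:
		return nil
	case <-timer.C:
		return errPoolTimeout
	}
}

func main() {
	q := make(chan struct{}, 1)
	q <- struct{}{} // pool exhausted
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	fmt.Println(waitTurn(ctx, q, time.Second)) // prints: context deadline exceeded
}
// -----------------------------------------------------------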
@@ -288,8 +314,14 @@ func (p *ConnPool) popIdle() *Conn {
}
func (p *ConnPool) Put(cn *Conn) {
if cn.rd.Buffered() > 0 {
internal.Logger.Printf("Conn has unread data")
p.Remove(cn, BadConnError{})
return
}
if !cn.pooled {
p.Remove(cn)
p.Remove(cn, nil)
return
}
@ -300,19 +332,24 @@ func (p *ConnPool) Put(cn *Conn) {
p.freeTurn()
}
func (p *ConnPool) Remove(cn *Conn) {
p.removeConn(cn)
func (p *ConnPool) Remove(cn *Conn, reason error) {
p.removeConnWithLock(cn)
p.freeTurn()
_ = p.closeConn(cn)
}
func (p *ConnPool) CloseConn(cn *Conn) error {
p.removeConn(cn)
p.removeConnWithLock(cn)
return p.closeConn(cn)
}
func (p *ConnPool) removeConn(cn *Conn) {
func (p *ConnPool) removeConnWithLock(cn *Conn) {
p.connsMu.Lock()
p.removeConn(cn)
p.connsMu.Unlock()
}
func (p *ConnPool) removeConn(cn *Conn) {
for i, c := range p.conns {
if c == cn {
p.conns = append(p.conns[:i], p.conns[i+1:]...)
@ -320,10 +357,9 @@ func (p *ConnPool) removeConn(cn *Conn) {
p.poolSize--
p.checkMinIdleConns()
}
break
return
}
}
p.connsMu.Unlock()
}
func (p *ConnPool) closeConn(cn *Conn) error {
@ -384,6 +420,7 @@ func (p *ConnPool) Close() error {
if !atomic.CompareAndSwapUint32(&p._closed, 0, 1) {
return ErrClosed
}
close(p.closedCh)
var firstErr error
p.connsMu.Lock()
@ -401,6 +438,51 @@ func (p *ConnPool) Close() error {
return firstErr
}
func (p *ConnPool) reaper(frequency time.Duration) {
ticker := time.NewTicker(frequency)
defer ticker.Stop()
for {
select {
case <-ticker.C:
// It is possible that ticker and closedCh fire at the same time,
// and select pseudo-randomly picks the ticker case; we double-check
// here to avoid running after the pool is closed.
if p.closed() {
return
}
_, err := p.ReapStaleConns()
if err != nil {
internal.Logger.Printf("ReapStaleConns failed: %s", err)
continue
}
case <-p.closedCh:
return
}
}
}
func (p *ConnPool) ReapStaleConns() (int, error) {
var n int
for {
p.getTurn()
p.connsMu.Lock()
cn := p.reapStaleConn()
p.connsMu.Unlock()
p.freeTurn()
if cn != nil {
_ = p.closeConn(cn)
n++
} else {
break
}
}
atomic.AddUint32(&p.stats.StaleConns, uint32(n))
return n, nil
}
func (p *ConnPool) reapStaleConn() *Conn {
if len(p.idleConns) == 0 {
return nil
@ -413,52 +495,11 @@ func (p *ConnPool) reapStaleConn() *Conn {
p.idleConns = append(p.idleConns[:0], p.idleConns[1:]...)
p.idleConnsLen--
p.removeConn(cn)
return cn
}
func (p *ConnPool) ReapStaleConns() (int, error) {
var n int
for {
p.getTurn()
p.connsMu.Lock()
cn := p.reapStaleConn()
p.connsMu.Unlock()
if cn != nil {
p.removeConn(cn)
}
p.freeTurn()
if cn != nil {
p.closeConn(cn)
n++
} else {
break
}
}
return n, nil
}
func (p *ConnPool) reaper(frequency time.Duration) {
ticker := time.NewTicker(frequency)
defer ticker.Stop()
for range ticker.C {
if p.closed() {
break
}
n, err := p.ReapStaleConns()
if err != nil {
internal.Logf("ReapStaleConns failed: %s", err)
continue
}
atomic.AddUint32(&p.stats.StaleConns, uint32(n))
}
}
func (p *ConnPool) isStaleConn(cn *Conn) bool {
if p.opt.IdleTimeout == 0 && p.opt.MaxConnAge == 0 {
return false
@ -468,7 +509,7 @@ func (p *ConnPool) isStaleConn(cn *Conn) bool {
if p.opt.IdleTimeout > 0 && now.Sub(cn.UsedAt()) >= p.opt.IdleTimeout {
return true
}
if p.opt.MaxConnAge > 0 && now.Sub(cn.InitedAt) >= p.opt.MaxConnAge {
if p.opt.MaxConnAge > 0 && now.Sub(cn.createdAt) >= p.opt.MaxConnAge {
return true
}


@ -0,0 +1,208 @@
package pool
import (
"context"
"fmt"
"sync/atomic"
)
const (
stateDefault = 0
stateInited = 1
stateClosed = 2
)
type BadConnError struct {
wrapped error
}
var _ error = (*BadConnError)(nil)
func (e BadConnError) Error() string {
s := "redis: Conn is in a bad state"
if e.wrapped != nil {
s += ": " + e.wrapped.Error()
}
return s
}
func (e BadConnError) Unwrap() error {
return e.wrapped
}
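Since BadConnError implements Unwrap, callers can inspect the wrapped cause with the standard errors helpers. A minimal sketch, assuming "errors" and "io" imports; isEOF is a hypothetical helper, not part of the package:

	// isEOF reports whether err is a BadConnError caused by io.EOF.
	func isEOF(err error) bool {
		var bad BadConnError
		if !errors.As(err, &bad) {
			return false
		}
		// errors.Is walks the chain exposed by BadConnError.Unwrap.
		return errors.Is(err, io.EOF)
	}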
type SingleConnPool struct {
pool Pooler
level int32 // atomic
state uint32 // atomic
ch chan *Conn
_badConnError atomic.Value
}
var _ Pooler = (*SingleConnPool)(nil)
func NewSingleConnPool(pool Pooler) *SingleConnPool {
p, ok := pool.(*SingleConnPool)
if !ok {
p = &SingleConnPool{
pool: pool,
ch: make(chan *Conn, 1),
}
}
atomic.AddInt32(&p.level, 1)
return p
}
func (p *SingleConnPool) SetConn(cn *Conn) {
if atomic.CompareAndSwapUint32(&p.state, stateDefault, stateInited) {
p.ch <- cn
} else {
panic("not reached")
}
}
func (p *SingleConnPool) NewConn(ctx context.Context) (*Conn, error) {
return p.pool.NewConn(ctx)
}
func (p *SingleConnPool) CloseConn(cn *Conn) error {
return p.pool.CloseConn(cn)
}
func (p *SingleConnPool) Get(ctx context.Context) (*Conn, error) {
// In the worst case this races with Close, which is not a very common operation.
for i := 0; i < 1000; i++ {
switch atomic.LoadUint32(&p.state) {
case stateDefault:
cn, err := p.pool.Get(ctx)
if err != nil {
return nil, err
}
if atomic.CompareAndSwapUint32(&p.state, stateDefault, stateInited) {
return cn, nil
}
p.pool.Remove(cn, ErrClosed)
case stateInited:
if err := p.badConnError(); err != nil {
return nil, err
}
cn, ok := <-p.ch
if !ok {
return nil, ErrClosed
}
return cn, nil
case stateClosed:
return nil, ErrClosed
default:
panic("not reached")
}
}
return nil, fmt.Errorf("redis: SingleConnPool.Get: infinite loop")
}
func (p *SingleConnPool) Put(cn *Conn) {
defer func() {
if recover() != nil {
p.freeConn(cn)
}
}()
p.ch <- cn
}
func (p *SingleConnPool) freeConn(cn *Conn) {
if err := p.badConnError(); err != nil {
p.pool.Remove(cn, err)
} else {
p.pool.Put(cn)
}
}
func (p *SingleConnPool) Remove(cn *Conn, reason error) {
defer func() {
if recover() != nil {
p.pool.Remove(cn, ErrClosed)
}
}()
p._badConnError.Store(BadConnError{wrapped: reason})
p.ch <- cn
}
func (p *SingleConnPool) Len() int {
switch atomic.LoadUint32(&p.state) {
case stateDefault:
return 0
case stateInited:
return 1
case stateClosed:
return 0
default:
panic("not reached")
}
}
func (p *SingleConnPool) IdleLen() int {
return len(p.ch)
}
func (p *SingleConnPool) Stats() *Stats {
return &Stats{}
}
func (p *SingleConnPool) Close() error {
level := atomic.AddInt32(&p.level, -1)
if level > 0 {
return nil
}
for i := 0; i < 1000; i++ {
state := atomic.LoadUint32(&p.state)
if state == stateClosed {
return ErrClosed
}
if atomic.CompareAndSwapUint32(&p.state, state, stateClosed) {
close(p.ch)
cn, ok := <-p.ch
if ok {
p.freeConn(cn)
}
return nil
}
}
return fmt.Errorf("redis: SingleConnPool.Close: infinite loop")
}
func (p *SingleConnPool) Reset() error {
if p.badConnError() == nil {
return nil
}
select {
case cn, ok := <-p.ch:
if !ok {
return ErrClosed
}
p.pool.Remove(cn, ErrClosed)
p._badConnError.Store(BadConnError{wrapped: nil})
default:
return fmt.Errorf("redis: SingleConnPool does not have a Conn")
}
if !atomic.CompareAndSwapUint32(&p.state, stateInited, stateDefault) {
state := atomic.LoadUint32(&p.state)
return fmt.Errorf("redis: invalid SingleConnPool state: %d", state)
}
return nil
}
func (p *SingleConnPool) badConnError() error {
if v := p._badConnError.Load(); v != nil {
err := v.(BadConnError)
if err.wrapped != nil {
return err
}
}
return nil
}


@ -1,6 +1,9 @@
package pool
import "sync"
import (
"context"
"sync"
)
type StickyConnPool struct {
pool *ConnPool
@ -20,7 +23,7 @@ func NewStickyConnPool(pool *ConnPool, reusable bool) *StickyConnPool {
}
}
func (p *StickyConnPool) NewConn() (*Conn, error) {
func (p *StickyConnPool) NewConn(context.Context) (*Conn, error) {
panic("not implemented")
}
@ -28,7 +31,7 @@ func (p *StickyConnPool) CloseConn(*Conn) error {
panic("not implemented")
}
func (p *StickyConnPool) Get() (*Conn, error) {
func (p *StickyConnPool) Get(ctx context.Context) (*Conn, error) {
p.mu.Lock()
defer p.mu.Unlock()
@ -39,7 +42,7 @@ func (p *StickyConnPool) Get() (*Conn, error) {
return p.cn, nil
}
cn, err := p.pool.Get()
cn, err := p.pool.Get(ctx)
if err != nil {
return nil, err
}
@ -55,13 +58,13 @@ func (p *StickyConnPool) putUpstream() {
func (p *StickyConnPool) Put(cn *Conn) {}
func (p *StickyConnPool) removeUpstream() {
p.pool.Remove(p.cn)
func (p *StickyConnPool) removeUpstream(reason error) {
p.pool.Remove(p.cn, reason)
p.cn = nil
}
func (p *StickyConnPool) Remove(cn *Conn) {
p.removeUpstream()
func (p *StickyConnPool) Remove(cn *Conn, reason error) {
p.removeUpstream(reason)
}
func (p *StickyConnPool) Len() int {
@ -101,7 +104,7 @@ func (p *StickyConnPool) Close() error {
if p.reusable {
p.putUpstream()
} else {
p.removeUpstream()
p.removeUpstream(ErrClosed)
}
}


@ -4,9 +4,8 @@ import (
"bufio"
"fmt"
"io"
"strconv"
"github.com/go-redis/redis/internal/util"
"github.com/go-redis/redis/v7/internal/util"
)
const (
@ -25,6 +24,8 @@ type RedisError string
func (e RedisError) Error() string { return string(e) }
func (RedisError) RedisError() {}
//------------------------------------------------------------------------------
type MultiBulkParse func(*Reader, int64) (interface{}, error)
@ -41,27 +42,44 @@ func NewReader(rd io.Reader) *Reader {
}
}
func (r *Reader) Buffered() int {
return r.rd.Buffered()
}
func (r *Reader) Peek(n int) ([]byte, error) {
return r.rd.Peek(n)
}
func (r *Reader) Reset(rd io.Reader) {
r.rd.Reset(rd)
}
func (r *Reader) ReadLine() ([]byte, error) {
line, isPrefix, err := r.rd.ReadLine()
line, err := r.readLine()
if err != nil {
return nil, err
}
if isPrefix {
return nil, bufio.ErrBufferFull
}
if len(line) == 0 {
return nil, fmt.Errorf("redis: reply is empty")
}
if isNilReply(line) {
return nil, Nil
}
return line, nil
}
// readLine returns an error if:
// - there is a pending read error;
// - or the line does not end with \r\n.
func (r *Reader) readLine() ([]byte, error) {
b, err := r.rd.ReadSlice('\n')
if err != nil {
return nil, err
}
if len(b) <= 2 || b[len(b)-1] != '\n' || b[len(b)-2] != '\r' {
return nil, fmt.Errorf("redis: invalid reply: %q", b)
}
b = b[:len(b)-2]
return b, nil
}
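As the comment above says, RESP terminates every reply line with \r\n. An illustrative sketch of what readLine accepts and rejects (package-internal; "fmt" and "strings" imports assumed):

	r := NewReader(strings.NewReader("+OK\r\n"))
	line, err := r.ReadLine()
	fmt.Printf("%q %v\n", line, err) // "+OK" <nil>

	// A line missing the \r is rejected:
	_, err = NewReader(strings.NewReader("+OK\n")).ReadLine()
	fmt.Println(err) // redis: invalid reply: "+OK\n"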
func (r *Reader) ReadReply(m MultiBulkParse) (interface{}, error) {
line, err := r.ReadLine()
if err != nil {
@ -82,6 +100,10 @@ func (r *Reader) ReadReply(m MultiBulkParse) (interface{}, error) {
if err != nil {
return nil, err
}
if m == nil {
err := fmt.Errorf("redis: got %.100q, but multi bulk parser is nil", line)
return nil, err
}
return m(r, n)
}
return nil, fmt.Errorf("redis: can't parse %.100q", line)
@ -126,7 +148,7 @@ func (r *Reader) readStringReply(line []byte) (string, error) {
return "", Nil
}
replyLen, err := strconv.Atoi(string(line[1:]))
replyLen, err := util.Atoi(line[1:])
if err != nil {
return "", err
}
@ -251,7 +273,7 @@ func (r *Reader) _readTmpBytesReply(line []byte) ([]byte, error) {
return nil, Nil
}
replyLen, err := strconv.Atoi(string(line[1:]))
replyLen, err := util.Atoi(line[1:])
if err != nil {
return nil, err
}
@ -266,10 +288,12 @@ func (r *Reader) _readTmpBytesReply(line []byte) ([]byte, error) {
}
func (r *Reader) buf(n int) []byte {
if d := n - cap(r._buf); d > 0 {
r._buf = append(r._buf, make([]byte, d)...)
if n <= cap(r._buf) {
return r._buf[:n]
}
return r._buf[:n]
d := n - cap(r._buf)
r._buf = append(r._buf, make([]byte, d)...)
return r._buf
}
func isNilReply(b []byte) bool {


@ -5,7 +5,7 @@ import (
"fmt"
"reflect"
"github.com/go-redis/redis/internal/util"
"github.com/go-redis/redis/v7/internal/util"
)
func Scan(b []byte, v interface{}) error {


@ -6,8 +6,9 @@ import (
"fmt"
"io"
"strconv"
"time"
"github.com/go-redis/redis/internal/util"
"github.com/go-redis/redis/v7/internal/util"
)
type Writer struct {
@ -89,9 +90,10 @@ func (w *Writer) writeArg(v interface{}) error {
case bool:
if v {
return w.int(1)
} else {
return w.int(0)
}
return w.int(0)
case time.Time:
return w.string(v.Format(time.RFC3339Nano))
case encoding.BinaryMarshaler:
b, err := v.MarshalBinary()
if err != nil {
@ -150,6 +152,10 @@ func (w *Writer) crlf() error {
return w.wr.WriteByte('\n')
}
func (w *Writer) Buffered() int {
return w.wr.Buffered()
}
func (w *Writer) Reset(wr io.Writer) {
w.wr.Reset(wr)
}

vendor/github.com/go-redis/redis/v7/internal/util.go

@ -0,0 +1,56 @@
package internal
import (
"context"
"time"
"github.com/go-redis/redis/v7/internal/util"
)
func Sleep(ctx context.Context, dur time.Duration) error {
t := time.NewTimer(dur)
defer t.Stop()
select {
case <-t.C:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func ToLower(s string) string {
if isLower(s) {
return s
}
b := make([]byte, len(s))
for i := range b {
c := s[i]
if c >= 'A' && c <= 'Z' {
c += 'a' - 'A'
}
b[i] = c
}
return util.BytesToString(b)
}
func isLower(s string) bool {
for i := 0; i < len(s); i++ {
c := s[i]
if c >= 'A' && c <= 'Z' {
return false
}
}
return true
}
func Unwrap(err error) error {
u, ok := err.(interface {
Unwrap() error
})
if !ok {
return nil
}
return u.Unwrap()
}


@ -1,6 +1,8 @@
package redis
import "sync"
import (
"sync"
)
// ScanIterator is used to incrementally iterate over a collection of elements.
// It's safe for concurrent use by multiple goroutines.
@ -41,10 +43,10 @@ func (it *ScanIterator) Next() bool {
}
// Fetch next page.
if it.cmd._args[0] == "scan" {
it.cmd._args[1] = it.cmd.cursor
if it.cmd.args[0] == "scan" {
it.cmd.args[1] = it.cmd.cursor
} else {
it.cmd._args[2] = it.cmd.cursor
it.cmd.args[2] = it.cmd.cursor
}
err := it.cmd.process(it.cmd)


@ -1,6 +1,7 @@
package redis
import (
"context"
"crypto/tls"
"errors"
"fmt"
@ -11,17 +12,17 @@ import (
"strings"
"time"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/v7/internal/pool"
)
// Limiter is the interface of a rate limiter or a circuit breaker.
type Limiter interface {
// Allow returns a nil if operation is allowed or an error otherwise.
// If operation is allowed client must report the result of operation
// whether is a success or a failure.
// Allow returns nil if the operation is allowed, or an error otherwise.
// If the operation is allowed, the client must call ReportResult with the
// outcome of the operation, whether it is a success or a failure.
Allow() error
// ReportResult reports the result of previously allowed operation.
// nil indicates a success, non-nil error indicates a failure.
// ReportResult reports the result of the previously allowed operation.
// nil indicates a success, non-nil error usually indicates a failure.
ReportResult(result error)
}
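A sketch of one way to satisfy this interface, using a semaphore as a crude concurrency limiter (concurrencyLimiter is an assumption for illustration, not part of the library; "errors" import assumed):

	type concurrencyLimiter struct {
		sem chan struct{} // buffered; capacity = max in-flight operations
	}

	func (l *concurrencyLimiter) Allow() error {
		select {
		case l.sem <- struct{}{}:
			return nil
		default:
			return errors.New("redis: too many in-flight operations")
		}
	}

	func (l *concurrencyLimiter) ReportResult(result error) {
		<-l.sem // release the slot whether the operation succeeded or failed
	}

It is wired in through the new Options.Limiter field added below.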
@ -34,13 +35,18 @@ type Options struct {
// Dialer creates a new network connection and has priority over
// the Network and Addr options.
Dialer func() (net.Conn, error)
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
// Hook that is called when a new connection is established.
OnConnect func(*Conn) error
// Use the specified Username to authenticate the current connection
// with one of the users defined in the ACL list when connecting
// to a Redis 6.0 instance, or greater, that is using the Redis ACL system.
Username string
// Optional password. Must match the password specified in the
// requirepass server configuration option.
// requirepass server configuration option (if connecting to a Redis 5.0 instance, or lower),
// or the User Password when connecting to a Redis 6.0 instance, or greater, that is using the Redis ACL system.
Password string
// Database to be selected after connecting to the server.
DB int
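A sketch of the new context-aware Dialer in use; the address and timeout are placeholder values:

	opt := &Options{
		Addr: "127.0.0.1:6379",
		Dialer: func(ctx context.Context, network, addr string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, addr)
		},
	}

When Dialer is left nil, init() installs an equivalent default (shown below) that also negotiates TLS when TLSConfig is set.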
@ -95,26 +101,32 @@ type Options struct {
// TLS Config to use. When set TLS will be negotiated.
TLSConfig *tls.Config
// Limiter interface used to implemented circuit breaker or rate limiter.
Limiter Limiter
}
func (opt *Options) init() {
if opt.Network == "" {
opt.Network = "tcp"
}
if opt.Addr == "" {
opt.Addr = "localhost:6379"
}
if opt.Network == "" {
if strings.HasPrefix(opt.Addr, "/") {
opt.Network = "unix"
} else {
opt.Network = "tcp"
}
}
if opt.Dialer == nil {
opt.Dialer = func() (net.Conn, error) {
opt.Dialer = func(ctx context.Context, network, addr string) (net.Conn, error) {
netDialer := &net.Dialer{
Timeout: opt.DialTimeout,
KeepAlive: 5 * time.Minute,
}
if opt.TLSConfig == nil {
return netDialer.Dial(opt.Network, opt.Addr)
} else {
return tls.DialWithDialer(netDialer, opt.Network, opt.Addr, opt.TLSConfig)
return netDialer.DialContext(ctx, network, addr)
}
return tls.DialWithDialer(netDialer, network, addr, opt.TLSConfig)
}
}
if opt.PoolSize == 0 {
@ -145,6 +157,9 @@ func (opt *Options) init() {
opt.IdleCheckFrequency = time.Minute
}
if opt.MaxRetries == -1 {
opt.MaxRetries = 0
}
switch opt.MinRetryBackoff {
case -1:
opt.MinRetryBackoff = 0
@ -159,6 +174,11 @@ func (opt *Options) init() {
}
}
func (opt *Options) clone() *Options {
clone := *opt
return &clone
}
// ParseURL parses a URL into Options that can be used to connect to Redis.
func ParseURL(redisURL string) (*Options, error) {
o := &Options{Network: "tcp"}
@ -172,6 +192,7 @@ func ParseURL(redisURL string) (*Options, error) {
}
if u.User != nil {
o.Username = u.User.Username()
if p, ok := u.User.Password(); ok {
o.Password = p
}
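For example, a full connection URL round-trips into Options (host and credentials are placeholders):

	opt, err := ParseURL("redis://user:secret@localhost:6379/2")
	if err != nil {
		panic(err)
	}
	client := NewClient(opt) // DB 2, authenticated as "user" with password "secret"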
@ -215,7 +236,9 @@ func ParseURL(redisURL string) (*Options, error) {
func newConnPool(opt *Options) *pool.ConnPool {
return pool.NewConnPool(&pool.Options{
Dialer: opt.Dialer,
Dialer: func(ctx context.Context) (net.Conn, error) {
return opt.Dialer(ctx, opt.Network, opt.Addr)
},
PoolSize: opt.PoolSize,
MinIdleConns: opt.MinIdleConns,
MaxConnAge: opt.MaxConnAge,


@ -1,13 +1,27 @@
package redis
import (
"context"
"sync"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/v7/internal/pool"
)
type pipelineExecer func([]Cmder) error
type pipelineExecer func(context.Context, []Cmder) error
// Pipeliner is a mechanism to realise the Redis Pipeline technique.
//
// Pipelining is a technique to greatly speed up processing by packing
// operations into batches, sending them at once to Redis and reading the
// replies in a single step.
// See https://redis.io/topics/pipelining
//
// Pay attention that Pipeline is not a transaction, so you can get unexpected
// results in case of big pipelines and small read/write timeouts.
// The Redis client has retransmission logic in case of timeouts: a pipeline
// can be retransmitted and commands can be executed more than once.
// To avoid this, it is a good idea to use reasonably large read/write
// timeouts, depending on your batch size, and/or to use TxPipeline.
type Pipeliner interface {
StatefulCmdable
Do(args ...interface{}) *Cmd
@ -15,6 +29,7 @@ type Pipeliner interface {
Close() error
Discard() error
Exec() ([]Cmder, error)
ExecContext(ctx context.Context) ([]Cmder, error)
}
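A minimal usage sketch of the interface above, assuming client is a *Client:

	cmds, err := client.Pipelined(func(pipe Pipeliner) error {
		pipe.Set("key", "value", 0)
		pipe.Incr("counter")
		return nil
	})
	// err is the error of the first failed command, if any;
	// cmds holds one Cmder per queued command, in order.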
var _ Pipeliner = (*Pipeline)(nil)
@ -23,8 +38,10 @@ var _ Pipeliner = (*Pipeline)(nil)
// http://redis.io/topics/pipelining. It's safe for concurrent use
// by multiple goroutines.
type Pipeline struct {
cmdable
statefulCmdable
ctx context.Context
exec pipelineExecer
mu sync.Mutex
@ -32,6 +49,11 @@ type Pipeline struct {
closed bool
}
func (c *Pipeline) init() {
c.cmdable = c.Process
c.statefulCmdable = c.Process
}
func (c *Pipeline) Do(args ...interface{}) *Cmd {
cmd := NewCmd(args...)
_ = c.Process(cmd)
@ -49,7 +71,7 @@ func (c *Pipeline) Process(cmd Cmder) error {
// Close closes the pipeline, releasing any open resources.
func (c *Pipeline) Close() error {
c.mu.Lock()
c.discard()
_ = c.discard()
c.closed = true
c.mu.Unlock()
return nil
@ -77,6 +99,10 @@ func (c *Pipeline) discard() error {
// Exec always returns the list of commands and the error of the first failed
// command, if any.
func (c *Pipeline) Exec() ([]Cmder, error) {
return c.ExecContext(c.ctx)
}
func (c *Pipeline) ExecContext(ctx context.Context) ([]Cmder, error) {
c.mu.Lock()
defer c.mu.Unlock()
@ -91,10 +117,10 @@ func (c *Pipeline) Exec() ([]Cmder, error) {
cmds := c.cmds
c.cmds = nil
return cmds, c.exec(cmds)
return cmds, c.exec(ctx, cmds)
}
func (c *Pipeline) pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
func (c *Pipeline) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
if err := fn(c); err != nil {
return nil, err
}
@ -103,16 +129,12 @@ func (c *Pipeline) pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return cmds, err
}
func (c *Pipeline) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.pipelined(fn)
}
func (c *Pipeline) Pipeline() Pipeliner {
return c
}
func (c *Pipeline) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.pipelined(fn)
return c.Pipelined(fn)
}
func (c *Pipeline) TxPipeline() Pipeliner {


@ -1,19 +1,23 @@
package redis
import (
"context"
"errors"
"fmt"
"strings"
"sync"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/internal/proto"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/pool"
"github.com/go-redis/redis/v7/internal/proto"
)
const pingTimeout = 30 * time.Second
var errPingTimeout = errors.New("redis: ping timeout")
// PubSub implements Pub/Sub commands bas described in
// PubSub implements Pub/Sub commands as described in
// http://redis.io/topics/pubsub. Message receiving is NOT safe
// for concurrent use by multiple goroutines.
//
@ -29,28 +33,36 @@ type PubSub struct {
cn *pool.Conn
channels map[string]struct{}
patterns map[string]struct{}
closed bool
exit chan struct{}
closed bool
exit chan struct{}
cmd *Cmd
chOnce sync.Once
ch chan *Message
msgCh chan *Message
allCh chan interface{}
ping chan struct{}
}
func (c *PubSub) String() string {
channels := mapKeys(c.channels)
channels = append(channels, mapKeys(c.patterns)...)
return fmt.Sprintf("PubSub(%s)", strings.Join(channels, ", "))
}
func (c *PubSub) init() {
c.exit = make(chan struct{})
}
func (c *PubSub) conn() (*pool.Conn, error) {
func (c *PubSub) connWithLock() (*pool.Conn, error) {
c.mu.Lock()
cn, err := c._conn(nil)
cn, err := c.conn(nil)
c.mu.Unlock()
return cn, err
}
func (c *PubSub) _conn(newChannels []string) (*pool.Conn, error) {
func (c *PubSub) conn(newChannels []string) (*pool.Conn, error) {
if c.closed {
return nil, pool.ErrClosed
}
@ -75,8 +87,8 @@ func (c *PubSub) _conn(newChannels []string) (*pool.Conn, error) {
return cn, nil
}
func (c *PubSub) writeCmd(cn *pool.Conn, cmd Cmder) error {
return cn.WithWriter(c.opt.WriteTimeout, func(wr *proto.Writer) error {
func (c *PubSub) writeCmd(ctx context.Context, cn *pool.Conn, cmd Cmder) error {
return cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmd)
})
}
@ -85,10 +97,7 @@ func (c *PubSub) resubscribe(cn *pool.Conn) error {
var firstErr error
if len(c.channels) > 0 {
err := c._subscribe(cn, "subscribe", mapKeys(c.channels))
if err != nil && firstErr == nil {
firstErr = err
}
firstErr = c._subscribe(cn, "subscribe", mapKeys(c.channels))
}
if len(c.patterns) > 0 {
@ -120,35 +129,35 @@ func (c *PubSub) _subscribe(
args = append(args, channel)
}
cmd := NewSliceCmd(args...)
return c.writeCmd(cn, cmd)
return c.writeCmd(context.TODO(), cn, cmd)
}
func (c *PubSub) releaseConn(cn *pool.Conn, err error, allowTimeout bool) {
func (c *PubSub) releaseConnWithLock(cn *pool.Conn, err error, allowTimeout bool) {
c.mu.Lock()
c._releaseConn(cn, err, allowTimeout)
c.releaseConn(cn, err, allowTimeout)
c.mu.Unlock()
}
func (c *PubSub) _releaseConn(cn *pool.Conn, err error, allowTimeout bool) {
func (c *PubSub) releaseConn(cn *pool.Conn, err error, allowTimeout bool) {
if c.cn != cn {
return
}
if internal.IsBadConn(err, allowTimeout) {
c._reconnect(err)
if isBadConn(err, allowTimeout) {
c.reconnect(err)
}
}
func (c *PubSub) _reconnect(reason error) {
_ = c._closeTheCn(reason)
_, _ = c._conn(nil)
func (c *PubSub) reconnect(reason error) {
_ = c.closeTheCn(reason)
_, _ = c.conn(nil)
}
func (c *PubSub) _closeTheCn(reason error) error {
func (c *PubSub) closeTheCn(reason error) error {
if c.cn == nil {
return nil
}
if !c.closed {
internal.Logf("redis: discarding bad PubSub connection: %s", reason)
internal.Logger.Printf("redis: discarding bad PubSub connection: %s", reason)
}
err := c.closeConn(c.cn)
c.cn = nil
@ -165,8 +174,7 @@ func (c *PubSub) Close() error {
c.closed = true
close(c.exit)
err := c._closeTheCn(pool.ErrClosed)
return err
return c.closeTheCn(pool.ErrClosed)
}
// Subscribe the client to the specified channels. It returns
@ -228,13 +236,13 @@ func (c *PubSub) PUnsubscribe(patterns ...string) error {
}
func (c *PubSub) subscribe(redisCmd string, channels ...string) error {
cn, err := c._conn(channels)
cn, err := c.conn(channels)
if err != nil {
return err
}
err = c._subscribe(cn, redisCmd, channels)
c._releaseConn(cn, err, false)
c.releaseConn(cn, err, false)
return err
}
@ -245,13 +253,13 @@ func (c *PubSub) Ping(payload ...string) error {
}
cmd := NewCmd(args...)
cn, err := c.conn()
cn, err := c.connWithLock()
if err != nil {
return err
}
err = c.writeCmd(cn, cmd)
c.releaseConn(cn, err, false)
err = c.writeCmd(context.TODO(), cn, cmd)
c.releaseConnWithLock(cn, err, false)
return err
}
@ -301,9 +309,11 @@ func (c *PubSub) newMessage(reply interface{}) (interface{}, error) {
case []interface{}:
switch kind := reply[0].(string); kind {
case "subscribe", "unsubscribe", "psubscribe", "punsubscribe":
// Can be nil in case of "unsubscribe".
channel, _ := reply[1].(string)
return &Subscription{
Kind: kind,
Channel: reply[1].(string),
Channel: channel,
Count: int(reply[2].(int64)),
}, nil
case "message":
@ -337,16 +347,16 @@ func (c *PubSub) ReceiveTimeout(timeout time.Duration) (interface{}, error) {
c.cmd = NewCmd()
}
cn, err := c.conn()
cn, err := c.connWithLock()
if err != nil {
return nil, err
}
err = cn.WithReader(timeout, func(rd *proto.Reader) error {
err = cn.WithReader(context.TODO(), timeout, func(rd *proto.Reader) error {
return c.cmd.readReply(rd)
})
c.releaseConn(cn, err, timeout > 0)
c.releaseConnWithLock(cn, err, timeout > 0)
if err != nil {
return nil, err
}
@ -386,63 +396,64 @@ func (c *PubSub) ReceiveMessage() (*Message, error) {
}
// Channel returns a Go channel for concurrently receiving messages.
// It periodically sends Ping messages to test connection health.
// The channel is closed with PubSub. Receive* APIs can not be used
// after channel is created.
// The channel is closed together with the PubSub. If the Go channel
// stays full (blocked) for 30 seconds, the message is dropped.
// Receive* APIs cannot be used after the channel is created.
//
// go-redis periodically sends ping messages to test connection health
// and re-subscribes if a ping is not received for 30 seconds.
func (c *PubSub) Channel() <-chan *Message {
c.chOnce.Do(c.initChannel)
return c.ch
return c.ChannelSize(100)
}
func (c *PubSub) initChannel() {
c.ch = make(chan *Message, 100)
c.ping = make(chan struct{}, 10)
// ChannelSize is like Channel, but creates a Go channel
// with specified buffer size.
func (c *PubSub) ChannelSize(size int) <-chan *Message {
c.chOnce.Do(func() {
c.initPing()
c.initMsgChan(size)
})
if c.msgCh == nil {
err := fmt.Errorf("redis: Channel can't be called after ChannelWithSubscriptions")
panic(err)
}
if cap(c.msgCh) != size {
err := fmt.Errorf("redis: PubSub.Channel size can not be changed once created")
panic(err)
}
return c.msgCh
}
// ChannelWithSubscriptions is like Channel, but message type can be either
// *Subscription or *Message. Subscription messages can be used to detect
// reconnections.
//
// ChannelWithSubscriptions can not be used together with Channel or ChannelSize.
func (c *PubSub) ChannelWithSubscriptions(size int) <-chan interface{} {
c.chOnce.Do(func() {
c.initPing()
c.initAllChan(size)
})
if c.allCh == nil {
err := fmt.Errorf("redis: ChannelWithSubscriptions can't be called after Channel")
panic(err)
}
if cap(c.allCh) != size {
err := fmt.Errorf("redis: PubSub.Channel size can not be changed once created")
panic(err)
}
return c.allCh
}
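Typical consumption of the channel APIs above, assuming client is a *Client ("fmt" import and channel name are placeholders):

	pubsub := client.Subscribe("mychannel")
	defer pubsub.Close()

	for msg := range pubsub.Channel() {
		fmt.Println(msg.Channel, msg.Payload)
	}

The range loop terminates when the PubSub is closed, because the receive goroutine closes msgCh on pool.ErrClosed.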
func (c *PubSub) initPing() {
c.ping = make(chan struct{}, 1)
go func() {
var errCount int
for {
msg, err := c.Receive()
if err != nil {
if err == pool.ErrClosed {
close(c.ch)
return
}
if errCount > 0 {
time.Sleep(c.retryBackoff(errCount))
}
errCount++
continue
}
errCount = 0
// Any message is as good as a ping.
select {
case c.ping <- struct{}{}:
default:
}
switch msg := msg.(type) {
case *Subscription:
// Ignore.
case *Pong:
// Ignore.
case *Message:
c.ch <- msg
default:
internal.Logf("redis: unknown message: %T", msg)
}
}
}()
go func() {
const timeout = 5 * time.Second
timer := time.NewTimer(timeout)
timer := time.NewTimer(pingTimeout)
timer.Stop()
healthy := true
for {
timer.Reset(timeout)
timer.Reset(pingTimeout)
select {
case <-c.ping:
healthy = true
@ -458,7 +469,8 @@ func (c *PubSub) initChannel() {
pingErr = errPingTimeout
}
c.mu.Lock()
c._reconnect(pingErr)
c.reconnect(pingErr)
healthy = true
c.mu.Unlock()
}
case <-c.exit:
@ -468,6 +480,116 @@ func (c *PubSub) initChannel() {
}()
}
// initMsgChan must be in sync with initAllChan.
func (c *PubSub) initMsgChan(size int) {
c.msgCh = make(chan *Message, size)
go func() {
timer := time.NewTimer(pingTimeout)
timer.Stop()
var errCount int
for {
msg, err := c.Receive()
if err != nil {
if err == pool.ErrClosed {
close(c.msgCh)
return
}
if errCount > 0 {
time.Sleep(c.retryBackoff(errCount))
}
errCount++
continue
}
errCount = 0
// Any message is as good as a ping.
select {
case c.ping <- struct{}{}:
default:
}
switch msg := msg.(type) {
case *Subscription:
// Ignore.
case *Pong:
// Ignore.
case *Message:
timer.Reset(pingTimeout)
select {
case c.msgCh <- msg:
if !timer.Stop() {
<-timer.C
}
case <-timer.C:
internal.Logger.Printf(
"redis: %s channel is full for %s (message is dropped)", c, pingTimeout)
}
default:
internal.Logger.Printf("redis: unknown message type: %T", msg)
}
}
}()
}
// initAllChan must be in sync with initMsgChan.
func (c *PubSub) initAllChan(size int) {
c.allCh = make(chan interface{}, size)
go func() {
timer := time.NewTimer(pingTimeout)
timer.Stop()
var errCount int
for {
msg, err := c.Receive()
if err != nil {
if err == pool.ErrClosed {
close(c.allCh)
return
}
if errCount > 0 {
time.Sleep(c.retryBackoff(errCount))
}
errCount++
continue
}
errCount = 0
// Any message is as good as a ping.
select {
case c.ping <- struct{}{}:
default:
}
switch msg := msg.(type) {
case *Subscription:
c.sendMessage(msg, timer)
case *Pong:
// Ignore.
case *Message:
c.sendMessage(msg, timer)
default:
internal.Logger.Printf("redis: unknown message type: %T", msg)
}
}
}()
}
func (c *PubSub) sendMessage(msg interface{}, timer *time.Timer) {
timer.Reset(pingTimeout)
select {
case c.allCh <- msg:
if !timer.Stop() {
<-timer.C
}
case <-timer.C:
internal.Logger.Printf(
"redis: %s channel is full for %s (message is dropped)", c, pingTimeout)
}
}
func (c *PubSub) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}

vendor/github.com/go-redis/redis/v7/redis.go

@ -0,0 +1,758 @@
package redis
import (
"context"
"fmt"
"log"
"time"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/pool"
"github.com/go-redis/redis/v7/internal/proto"
)
// Nil reply returned by Redis when key does not exist.
const Nil = proto.Nil
func SetLogger(logger *log.Logger) {
internal.Logger = logger
}
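For example, to route client logs to a specific destination ("os" import and the prefix are placeholders):

	SetLogger(log.New(os.Stderr, "redis: ", log.LstdFlags|log.Lshortfile))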
//------------------------------------------------------------------------------
type Hook interface {
BeforeProcess(ctx context.Context, cmd Cmder) (context.Context, error)
AfterProcess(ctx context.Context, cmd Cmder) error
BeforeProcessPipeline(ctx context.Context, cmds []Cmder) (context.Context, error)
AfterProcessPipeline(ctx context.Context, cmds []Cmder) error
}
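A sketch of a Hook that logs per-command latency; timingHook and startKey are hypothetical names, and "log"/"time" imports are assumed:

	type startKey struct{}

	type timingHook struct{}

	func (timingHook) BeforeProcess(ctx context.Context, cmd Cmder) (context.Context, error) {
		return context.WithValue(ctx, startKey{}, time.Now()), nil
	}

	func (timingHook) AfterProcess(ctx context.Context, cmd Cmder) error {
		if start, ok := ctx.Value(startKey{}).(time.Time); ok {
			log.Printf("%s took %s", cmd.Name(), time.Since(start))
		}
		return nil
	}

	func (timingHook) BeforeProcessPipeline(ctx context.Context, cmds []Cmder) (context.Context, error) {
		return ctx, nil
	}

	func (timingHook) AfterProcessPipeline(ctx context.Context, cmds []Cmder) error {
		return nil
	}

It would be registered with client.AddHook(timingHook{}), since Client embeds hooks.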
type hooks struct {
hooks []Hook
}
func (hs *hooks) lock() {
hs.hooks = hs.hooks[:len(hs.hooks):len(hs.hooks)]
}
func (hs hooks) clone() hooks {
clone := hs
clone.lock()
return clone
}
func (hs *hooks) AddHook(hook Hook) {
hs.hooks = append(hs.hooks, hook)
}
func (hs hooks) process(
ctx context.Context, cmd Cmder, fn func(context.Context, Cmder) error,
) error {
ctx, err := hs.beforeProcess(ctx, cmd)
if err != nil {
cmd.SetErr(err)
return err
}
cmdErr := fn(ctx, cmd)
if err := hs.afterProcess(ctx, cmd); err != nil {
cmd.SetErr(err)
return err
}
return cmdErr
}
func (hs hooks) beforeProcess(ctx context.Context, cmd Cmder) (context.Context, error) {
for _, h := range hs.hooks {
var err error
ctx, err = h.BeforeProcess(ctx, cmd)
if err != nil {
return nil, err
}
}
return ctx, nil
}
func (hs hooks) afterProcess(ctx context.Context, cmd Cmder) error {
var firstErr error
for _, h := range hs.hooks {
err := h.AfterProcess(ctx, cmd)
if err != nil && firstErr == nil {
firstErr = err
}
}
return firstErr
}
func (hs hooks) processPipeline(
ctx context.Context, cmds []Cmder, fn func(context.Context, []Cmder) error,
) error {
ctx, err := hs.beforeProcessPipeline(ctx, cmds)
if err != nil {
setCmdsErr(cmds, err)
return err
}
cmdsErr := fn(ctx, cmds)
if err := hs.afterProcessPipeline(ctx, cmds); err != nil {
setCmdsErr(cmds, err)
return err
}
return cmdsErr
}
func (hs hooks) beforeProcessPipeline(ctx context.Context, cmds []Cmder) (context.Context, error) {
for _, h := range hs.hooks {
var err error
ctx, err = h.BeforeProcessPipeline(ctx, cmds)
if err != nil {
return nil, err
}
}
return ctx, nil
}
func (hs hooks) afterProcessPipeline(ctx context.Context, cmds []Cmder) error {
var firstErr error
for _, h := range hs.hooks {
err := h.AfterProcessPipeline(ctx, cmds)
if err != nil && firstErr == nil {
firstErr = err
}
}
return firstErr
}
func (hs hooks) processTxPipeline(
ctx context.Context, cmds []Cmder, fn func(context.Context, []Cmder) error,
) error {
cmds = wrapMultiExec(cmds)
return hs.processPipeline(ctx, cmds, fn)
}
//------------------------------------------------------------------------------
type baseClient struct {
opt *Options
connPool pool.Pooler
onClose func() error // hook called when client is closed
}
func newBaseClient(opt *Options, connPool pool.Pooler) *baseClient {
return &baseClient{
opt: opt,
connPool: connPool,
}
}
func (c *baseClient) clone() *baseClient {
clone := *c
return &clone
}
func (c *baseClient) withTimeout(timeout time.Duration) *baseClient {
opt := c.opt.clone()
opt.ReadTimeout = timeout
opt.WriteTimeout = timeout
clone := c.clone()
clone.opt = opt
return clone
}
func (c *baseClient) String() string {
return fmt.Sprintf("Redis<%s db:%d>", c.getAddr(), c.opt.DB)
}
func (c *baseClient) newConn(ctx context.Context) (*pool.Conn, error) {
cn, err := c.connPool.NewConn(ctx)
if err != nil {
return nil, err
}
err = c.initConn(ctx, cn)
if err != nil {
_ = c.connPool.CloseConn(cn)
return nil, err
}
return cn, nil
}
func (c *baseClient) getConn(ctx context.Context) (*pool.Conn, error) {
if c.opt.Limiter != nil {
err := c.opt.Limiter.Allow()
if err != nil {
return nil, err
}
}
cn, err := c._getConn(ctx)
if err != nil {
if c.opt.Limiter != nil {
c.opt.Limiter.ReportResult(err)
}
return nil, err
}
return cn, nil
}
func (c *baseClient) _getConn(ctx context.Context) (*pool.Conn, error) {
cn, err := c.connPool.Get(ctx)
if err != nil {
return nil, err
}
err = c.initConn(ctx, cn)
if err != nil {
c.connPool.Remove(cn, err)
if err := internal.Unwrap(err); err != nil {
return nil, err
}
return nil, err
}
return cn, nil
}
func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
if cn.Inited {
return nil
}
cn.Inited = true
if c.opt.Password == "" &&
c.opt.DB == 0 &&
!c.opt.readOnly &&
c.opt.OnConnect == nil {
return nil
}
connPool := pool.NewSingleConnPool(nil)
connPool.SetConn(cn)
conn := newConn(ctx, c.opt, connPool)
_, err := conn.Pipelined(func(pipe Pipeliner) error {
if c.opt.Password != "" {
if c.opt.Username != "" {
pipe.AuthACL(c.opt.Username, c.opt.Password)
} else {
pipe.Auth(c.opt.Password)
}
}
if c.opt.DB > 0 {
pipe.Select(c.opt.DB)
}
if c.opt.readOnly {
pipe.ReadOnly()
}
return nil
})
if err != nil {
return err
}
if c.opt.OnConnect != nil {
return c.opt.OnConnect(conn)
}
return nil
}
func (c *baseClient) releaseConn(cn *pool.Conn, err error) {
if c.opt.Limiter != nil {
c.opt.Limiter.ReportResult(err)
}
if isBadConn(err, false) {
c.connPool.Remove(cn, err)
} else {
c.connPool.Put(cn)
}
}
func (c *baseClient) withConn(
ctx context.Context, fn func(context.Context, *pool.Conn) error,
) error {
cn, err := c.getConn(ctx)
if err != nil {
return err
}
defer func() {
c.releaseConn(cn, err)
}()
err = fn(ctx, cn)
return err
}
func (c *baseClient) process(ctx context.Context, cmd Cmder) error {
err := c._process(ctx, cmd)
if err != nil {
cmd.SetErr(err)
return err
}
return nil
}
func (c *baseClient) _process(ctx context.Context, cmd Cmder) error {
var lastErr error
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
retryTimeout := true
lastErr = c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmd(wr, cmd)
})
if err != nil {
return err
}
err = cn.WithReader(ctx, c.cmdTimeout(cmd), cmd.readReply)
if err != nil {
retryTimeout = cmd.readTimeout() == nil
return err
}
return nil
})
if lastErr == nil || !isRetryableError(lastErr, retryTimeout) {
return lastErr
}
}
return lastErr
}
func (c *baseClient) retryBackoff(attempt int) time.Duration {
return internal.RetryBackoff(attempt, c.opt.MinRetryBackoff, c.opt.MaxRetryBackoff)
}
func (c *baseClient) cmdTimeout(cmd Cmder) time.Duration {
if timeout := cmd.readTimeout(); timeout != nil {
t := *timeout
if t == 0 {
return 0
}
return t + 10*time.Second
}
return c.opt.ReadTimeout
}
// Close closes the client, releasing any open resources.
//
// It is rare to Close a Client, as the Client is meant to be
// long-lived and shared between many goroutines.
func (c *baseClient) Close() error {
var firstErr error
if c.onClose != nil {
if err := c.onClose(); err != nil {
firstErr = err
}
}
if err := c.connPool.Close(); err != nil && firstErr == nil {
firstErr = err
}
return firstErr
}
func (c *baseClient) getAddr() string {
return c.opt.Addr
}
func (c *baseClient) processPipeline(ctx context.Context, cmds []Cmder) error {
return c.generalProcessPipeline(ctx, cmds, c.pipelineProcessCmds)
}
func (c *baseClient) processTxPipeline(ctx context.Context, cmds []Cmder) error {
return c.generalProcessPipeline(ctx, cmds, c.txPipelineProcessCmds)
}
type pipelineProcessor func(context.Context, *pool.Conn, []Cmder) (bool, error)
func (c *baseClient) generalProcessPipeline(
ctx context.Context, cmds []Cmder, p pipelineProcessor,
) error {
err := c._generalProcessPipeline(ctx, cmds, p)
if err != nil {
setCmdsErr(cmds, err)
return err
}
return cmdsFirstErr(cmds)
}
func (c *baseClient) _generalProcessPipeline(
ctx context.Context, cmds []Cmder, p pipelineProcessor,
) error {
var lastErr error
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
var canRetry bool
lastErr = c.withConn(ctx, func(ctx context.Context, cn *pool.Conn) error {
var err error
canRetry, err = p(ctx, cn, cmds)
return err
})
if lastErr == nil || !canRetry || !isRetryableError(lastErr, true) {
return lastErr
}
}
return lastErr
}
func (c *baseClient) pipelineProcessCmds(
ctx context.Context, cn *pool.Conn, cmds []Cmder,
) (bool, error) {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmds(wr, cmds)
})
if err != nil {
return true, err
}
err = cn.WithReader(ctx, c.opt.ReadTimeout, func(rd *proto.Reader) error {
return pipelineReadCmds(rd, cmds)
})
return true, err
}
func pipelineReadCmds(rd *proto.Reader, cmds []Cmder) error {
for _, cmd := range cmds {
err := cmd.readReply(rd)
if err != nil && !isRedisError(err) {
return err
}
}
return nil
}
func (c *baseClient) txPipelineProcessCmds(
ctx context.Context, cn *pool.Conn, cmds []Cmder,
) (bool, error) {
err := cn.WithWriter(ctx, c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmds(wr, cmds)
})
if err != nil {
return true, err
}
err = cn.WithReader(ctx, c.opt.ReadTimeout, func(rd *proto.Reader) error {
statusCmd := cmds[0].(*StatusCmd)
// Trim multi and exec.
cmds = cmds[1 : len(cmds)-1]
err := txPipelineReadQueued(rd, statusCmd, cmds)
if err != nil {
return err
}
return pipelineReadCmds(rd, cmds)
})
return false, err
}
func wrapMultiExec(cmds []Cmder) []Cmder {
if len(cmds) == 0 {
panic("not reached")
}
cmds = append(cmds, make([]Cmder, 2)...)
copy(cmds[1:], cmds[:len(cmds)-2])
cmds[0] = NewStatusCmd("multi")
cmds[len(cmds)-1] = NewSliceCmd("exec")
return cmds
}
func txPipelineReadQueued(rd *proto.Reader, statusCmd *StatusCmd, cmds []Cmder) error {
// Parse queued replies.
if err := statusCmd.readReply(rd); err != nil {
return err
}
for range cmds {
if err := statusCmd.readReply(rd); err != nil && !isRedisError(err) {
return err
}
}
// Parse number of replies.
line, err := rd.ReadLine()
if err != nil {
if err == Nil {
err = TxFailedErr
}
return err
}
switch line[0] {
case proto.ErrorReply:
return proto.ParseErrorReply(line)
case proto.ArrayReply:
// ok
default:
err := fmt.Errorf("redis: expected '*', but got line %q", line)
return err
}
return nil
}
//------------------------------------------------------------------------------
// Client is a Redis client representing a pool of zero or more
// underlying connections. It's safe for concurrent use by multiple
// goroutines.
type Client struct {
*baseClient
cmdable
hooks
ctx context.Context
}
// NewClient returns a client to the Redis Server specified by Options.
func NewClient(opt *Options) *Client {
opt.init()
c := Client{
baseClient: newBaseClient(opt, newConnPool(opt)),
ctx: context.Background(),
}
c.cmdable = c.Process
return &c
}
func (c *Client) clone() *Client {
clone := *c
clone.cmdable = clone.Process
clone.hooks.lock()
return &clone
}
func (c *Client) WithTimeout(timeout time.Duration) *Client {
clone := c.clone()
clone.baseClient = c.baseClient.withTimeout(timeout)
return clone
}
func (c *Client) Context() context.Context {
return c.ctx
}
func (c *Client) WithContext(ctx context.Context) *Client {
if ctx == nil {
panic("nil context")
}
clone := c.clone()
clone.ctx = ctx
return clone
}
func (c *Client) Conn() *Conn {
return newConn(c.ctx, c.opt, pool.NewSingleConnPool(c.connPool))
}
// Do creates a Cmd from the args and processes the cmd.
func (c *Client) Do(args ...interface{}) *Cmd {
return c.DoContext(c.ctx, args...)
}
func (c *Client) DoContext(ctx context.Context, args ...interface{}) *Cmd {
cmd := NewCmd(args...)
_ = c.ProcessContext(ctx, cmd)
return cmd
}
func (c *Client) Process(cmd Cmder) error {
return c.ProcessContext(c.ctx, cmd)
}
func (c *Client) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.hooks.process(ctx, cmd, c.baseClient.process)
}
func (c *Client) processPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, c.baseClient.processPipeline)
}
func (c *Client) processTxPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processTxPipeline(ctx, cmds, c.baseClient.processTxPipeline)
}
// Options returns read-only Options that were used to create the client.
func (c *Client) Options() *Options {
return c.opt
}
type PoolStats pool.Stats
// PoolStats returns connection pool stats.
func (c *Client) PoolStats() *PoolStats {
stats := c.connPool.Stats()
return (*PoolStats)(stats)
}
func (c *Client) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *Client) Pipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processPipeline,
}
pipe.init()
return &pipe
}
func (c *Client) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
// TxPipeline acts like Pipeline, but wraps queued commands with MULTI/EXEC.
func (c *Client) TxPipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processTxPipeline,
}
pipe.init()
return &pipe
}
func (c *Client) pubSub() *PubSub {
pubsub := &PubSub{
opt: c.opt,
newConn: func(channels []string) (*pool.Conn, error) {
return c.newConn(context.TODO())
},
closeConn: c.connPool.CloseConn,
}
pubsub.init()
return pubsub
}
// Subscribe subscribes the client to the specified channels.
// Channels can be omitted to create an empty subscription.
// Note that this method does not wait on a response from Redis, so the
// subscription may not be active immediately. To force the connection to wait,
// you may call the Receive() method on the returned *PubSub like so:
//
// sub := client.Subscribe(queryResp)
// iface, err := sub.Receive()
// if err != nil {
// // handle error
// }
//
// // Should be *Subscription, but others are possible if other actions have been
// // taken on sub since it was created.
// switch iface.(type) {
// case *Subscription:
// // subscribe succeeded
// case *Message:
// // received first message
// case *Pong:
// // pong received
// default:
// // handle error
// }
//
// ch := sub.Channel()
func (c *Client) Subscribe(channels ...string) *PubSub {
pubsub := c.pubSub()
if len(channels) > 0 {
_ = pubsub.Subscribe(channels...)
}
return pubsub
}
// PSubscribe subscribes the client to the given patterns.
// Patterns can be omitted to create an empty subscription.
func (c *Client) PSubscribe(channels ...string) *PubSub {
pubsub := c.pubSub()
if len(channels) > 0 {
_ = pubsub.PSubscribe(channels...)
}
return pubsub
}
//------------------------------------------------------------------------------
type conn struct {
baseClient
cmdable
statefulCmdable
}
// Conn is like Client, but its pool contains a single connection.
type Conn struct {
*conn
ctx context.Context
}
func newConn(ctx context.Context, opt *Options, connPool pool.Pooler) *Conn {
c := Conn{
conn: &conn{
baseClient: baseClient{
opt: opt,
connPool: connPool,
},
},
ctx: ctx,
}
c.cmdable = c.Process
c.statefulCmdable = c.Process
return &c
}
func (c *Conn) Process(cmd Cmder) error {
return c.ProcessContext(c.ctx, cmd)
}
func (c *Conn) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.baseClient.process(ctx, cmd)
}
func (c *Conn) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *Conn) Pipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processPipeline,
}
pipe.init()
return &pipe
}
func (c *Conn) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
// TxPipeline acts like Pipeline, but wraps queued commands with MULTI/EXEC.
func (c *Conn) TxPipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processTxPipeline,
}
pipe.init()
return &pipe
}


@ -6,7 +6,7 @@ import "time"
func NewCmdResult(val interface{}, err error) *Cmd {
var cmd Cmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -14,7 +14,7 @@ func NewCmdResult(val interface{}, err error) *Cmd {
func NewSliceResult(val []interface{}, err error) *SliceCmd {
var cmd SliceCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -22,7 +22,7 @@ func NewSliceResult(val []interface{}, err error) *SliceCmd {
func NewStatusResult(val string, err error) *StatusCmd {
var cmd StatusCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -30,7 +30,7 @@ func NewStatusResult(val string, err error) *StatusCmd {
func NewIntResult(val int64, err error) *IntCmd {
var cmd IntCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -38,7 +38,7 @@ func NewIntResult(val int64, err error) *IntCmd {
func NewDurationResult(val time.Duration, err error) *DurationCmd {
var cmd DurationCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -46,7 +46,7 @@ func NewDurationResult(val time.Duration, err error) *DurationCmd {
func NewBoolResult(val bool, err error) *BoolCmd {
var cmd BoolCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -54,7 +54,7 @@ func NewBoolResult(val bool, err error) *BoolCmd {
func NewStringResult(val string, err error) *StringCmd {
var cmd StringCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -62,7 +62,7 @@ func NewStringResult(val string, err error) *StringCmd {
func NewFloatResult(val float64, err error) *FloatCmd {
var cmd FloatCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -70,7 +70,7 @@ func NewFloatResult(val float64, err error) *FloatCmd {
func NewStringSliceResult(val []string, err error) *StringSliceCmd {
var cmd StringSliceCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -78,7 +78,7 @@ func NewStringSliceResult(val []string, err error) *StringSliceCmd {
func NewBoolSliceResult(val []bool, err error) *BoolSliceCmd {
var cmd BoolSliceCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -86,7 +86,7 @@ func NewBoolSliceResult(val []bool, err error) *BoolSliceCmd {
func NewStringStringMapResult(val map[string]string, err error) *StringStringMapCmd {
var cmd StringStringMapCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -94,7 +94,15 @@ func NewStringStringMapResult(val map[string]string, err error) *StringStringMap
func NewStringIntMapCmdResult(val map[string]int64, err error) *StringIntMapCmd {
var cmd StringIntMapCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
// NewTimeCmdResult returns a TimeCmd initialised with val and err for testing
func NewTimeCmdResult(val time.Time, err error) *TimeCmd {
var cmd TimeCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}
@ -102,7 +110,15 @@ func NewStringIntMapCmdResult(val map[string]int64, err error) *StringIntMapCmd
func NewZSliceCmdResult(val []Z, err error) *ZSliceCmd {
var cmd ZSliceCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
// NewZWithKeyCmdResult returns a ZWithKeyCmd initialised with val and err for testing
func NewZWithKeyCmdResult(val *ZWithKey, err error) *ZWithKeyCmd {
var cmd ZWithKeyCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}
@ -111,7 +127,7 @@ func NewScanCmdResult(keys []string, cursor uint64, err error) *ScanCmd {
var cmd ScanCmd
cmd.page = keys
cmd.cursor = cursor
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -119,7 +135,7 @@ func NewScanCmdResult(keys []string, cursor uint64, err error) *ScanCmd {
func NewClusterSlotsCmdResult(val []ClusterSlot, err error) *ClusterSlotsCmd {
var cmd ClusterSlotsCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
@ -127,7 +143,15 @@ func NewClusterSlotsCmdResult(val []ClusterSlot, err error) *ClusterSlotsCmd {
func NewGeoLocationCmdResult(val []GeoLocation, err error) *GeoLocationCmd {
var cmd GeoLocationCmd
cmd.locations = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
// NewGeoPosCmdResult returns a GeoPosCmd initialised with val and err for testing
func NewGeoPosCmdResult(val []*GeoPos, err error) *GeoPosCmd {
var cmd GeoPosCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}
@ -135,6 +159,22 @@ func NewGeoLocationCmdResult(val []GeoLocation, err error) *GeoLocationCmd {
func NewCommandsInfoCmdResult(val map[string]*CommandInfo, err error) *CommandsInfoCmd {
var cmd CommandsInfoCmd
cmd.val = val
cmd.setErr(err)
cmd.SetErr(err)
return &cmd
}
// NewXMessageSliceCmdResult returns an XMessageSliceCmd initialised with val and err for testing
func NewXMessageSliceCmdResult(val []XMessage, err error) *XMessageSliceCmd {
var cmd XMessageSliceCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}
// NewXStreamSliceCmdResult returns an XStreamSliceCmd initialised with val and err for testing
func NewXStreamSliceCmdResult(val []XStream, err error) *XStreamSliceCmd {
var cmd XStreamSliceCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}


@ -10,10 +10,10 @@ import (
"sync/atomic"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/consistenthash"
"github.com/go-redis/redis/internal/hashtag"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/consistenthash"
"github.com/go-redis/redis/v7/internal/hashtag"
"github.com/go-redis/redis/v7/internal/pool"
)
// Hash is type of hash function used in consistent hash.
@ -27,6 +27,10 @@ type RingOptions struct {
// Map of name => host:port addresses of ring shards.
Addrs map[string]string
// Map of name => password of ring shards, to allow different shards to have
// different passwords. It will be ignored if the Password field is set.
Passwords map[string]string
// Frequency of PING commands sent to check shard availability.
// A shard is considered down after 3 consecutive failed checks.
HeartbeatFrequency time.Duration
@ -52,6 +56,12 @@ type RingOptions struct {
// See https://arxiv.org/abs/1406.2294 for reference
HashReplicas int
// NewClient creates a shard client with provided name and options.
NewClient func(name string, opt *Options) *Client
// Optional hook that is called when a new shard is created.
OnNewShard func(*Client)
// Following options are copied from Options struct.
OnConnect func(*Conn) error
@ -98,12 +108,12 @@ func (opt *RingOptions) init() {
}
}
func (opt *RingOptions) clientOptions() *Options {
func (opt *RingOptions) clientOptions(shard string) *Options {
return &Options{
OnConnect: opt.OnConnect,
DB: opt.DB,
Password: opt.Password,
Password: opt.getPassword(shard),
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
@ -118,6 +128,13 @@ func (opt *RingOptions) clientOptions() *Options {
}
}
func (opt *RingOptions) getPassword(shard string) string {
if opt.Password == "" {
return opt.Passwords[shard]
}
return opt.Password
}
//------------------------------------------------------------------------------
type ringShard struct {
@ -260,7 +277,7 @@ func (c *ringShards) Heartbeat(frequency time.Duration) {
for _, shard := range shards {
err := shard.Client.Ping().Err()
if shard.Vote(err == nil || err == pool.ErrPoolTimeout) {
internal.Logf("ring shard state changed: %s", shard)
internal.Logger.Printf("ring shard state changed: %s", shard)
rebalance = true
}
}
@ -273,9 +290,13 @@ func (c *ringShards) Heartbeat(frequency time.Duration) {
// rebalance removes dead shards from the Ring.
func (c *ringShards) rebalance() {
c.mu.RLock()
shards := c.shards
c.mu.RUnlock()
hash := newConsistentHash(c.opt)
var shardsNum int
for name, shard := range c.shards {
for name, shard := range shards {
if shard.IsUp() {
hash.Add(name)
shardsNum++
@ -319,6 +340,12 @@ func (c *ringShards) Close() error {
//------------------------------------------------------------------------------
type ring struct {
opt *RingOptions
shards *ringShards
cmdsInfoCache *cmdsInfoCache //nolint:structcheck
}
// Ring is a Redis client that uses consistent hashing to distribute
// keys across multiple Redis servers (shards). It's safe for
// concurrent use by multiple goroutines.
@ -334,61 +361,82 @@ func (c *ringShards) Close() error {
// and can tolerate losing data when one of the servers dies.
// Otherwise you should use Redis Cluster.
type Ring struct {
*ring
cmdable
hooks
ctx context.Context
opt *RingOptions
shards *ringShards
cmdsInfoCache *cmdsInfoCache
process func(Cmder) error
processPipeline func([]Cmder) error
}
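A construction sketch, including the per-shard Passwords map introduced above (shard names, addresses and secrets are placeholders):

	ring := NewRing(&RingOptions{
		Addrs: map[string]string{
			"shard1": "localhost:7000",
			"shard2": "localhost:7001",
		},
		Passwords: map[string]string{
			"shard1": "secret1",
			"shard2": "secret2",
		},
	})
	defer ring.Close()

Keys are then routed to shards by consistent hashing, as the doc comment above describes.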
func NewRing(opt *RingOptions) *Ring {
opt.init()
ring := &Ring{
opt: opt,
shards: newRingShards(opt),
ring := Ring{
ring: &ring{
opt: opt,
shards: newRingShards(opt),
},
ctx: context.Background(),
}
ring.cmdsInfoCache = newCmdsInfoCache(ring.cmdsInfo)
ring.process = ring.defaultProcess
ring.processPipeline = ring.defaultProcessPipeline
ring.cmdable.setProcessor(ring.Process)
ring.cmdable = ring.Process
for name, addr := range opt.Addrs {
clopt := opt.clientOptions()
clopt.Addr = addr
ring.shards.Add(name, NewClient(clopt))
shard := newRingShard(opt, name, addr)
ring.shards.Add(name, shard)
}
go ring.shards.Heartbeat(opt.HeartbeatFrequency)
return ring
return &ring
}
func newRingShard(opt *RingOptions, name, addr string) *Client {
clopt := opt.clientOptions(name)
clopt.Addr = addr
var shard *Client
if opt.NewClient != nil {
shard = opt.NewClient(name, clopt)
} else {
shard = NewClient(clopt)
}
if opt.OnNewShard != nil {
opt.OnNewShard(shard)
}
return shard
}
func (c *Ring) Context() context.Context {
if c.ctx != nil {
return c.ctx
}
return context.Background()
return c.ctx
}
func (c *Ring) WithContext(ctx context.Context) *Ring {
if ctx == nil {
panic("nil context")
}
c2 := c.copy()
c2.ctx = ctx
return c2
clone := *c
clone.cmdable = clone.Process
clone.hooks.lock()
clone.ctx = ctx
return &clone
}
func (c *Ring) copy() *Ring {
cp := *c
return &cp
// Do creates a Cmd from the args and processes the cmd.
func (c *Ring) Do(args ...interface{}) *Cmd {
return c.DoContext(c.ctx, args...)
}
func (c *Ring) DoContext(ctx context.Context, args ...interface{}) *Cmd {
cmd := NewCmd(args...)
_ = c.ProcessContext(ctx, cmd)
return cmd
}
func (c *Ring) Process(cmd Cmder) error {
return c.ProcessContext(c.ctx, cmd)
}
func (c *Ring) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.hooks.process(ctx, cmd, c.process)
}
// Options returns read-only Options that were used to create the client.
@ -428,7 +476,7 @@ func (c *Ring) Subscribe(channels ...string) *PubSub {
shard, err := c.shards.GetByKey(channels[0])
if err != nil {
// TODO: return PubSub with sticky error
//TODO: return PubSub with sticky error
panic(err)
}
return shard.Client.Subscribe(channels...)
@ -442,7 +490,7 @@ func (c *Ring) PSubscribe(channels ...string) *PubSub {
shard, err := c.shards.GetByKey(channels[0])
if err != nil {
// TODO: return PubSub with sticky error
//TODO: return PubSub with sticky error
panic(err)
}
return shard.Client.PSubscribe(channels...)
@ -503,7 +551,7 @@ func (c *Ring) cmdInfo(name string) *CommandInfo {
}
info := cmdsInfo[name]
if info == nil {
internal.Logf("info for cmd=%s not found", name)
internal.Logger.Printf("info for cmd=%s not found", name)
}
return info
}
@ -518,65 +566,78 @@ func (c *Ring) cmdShard(cmd Cmder) (*ringShard, error) {
return c.shards.GetByKey(firstKey)
}
// Do creates a Cmd from the args and processes the cmd.
func (c *Ring) Do(args ...interface{}) *Cmd {
cmd := NewCmd(args...)
c.Process(cmd)
return cmd
func (c *Ring) process(ctx context.Context, cmd Cmder) error {
err := c._process(ctx, cmd)
if err != nil {
cmd.SetErr(err)
return err
}
return nil
}
func (c *Ring) WrapProcess(
fn func(oldProcess func(cmd Cmder) error) func(cmd Cmder) error,
) {
c.process = fn(c.process)
}
func (c *Ring) Process(cmd Cmder) error {
return c.process(cmd)
}
func (c *Ring) defaultProcess(cmd Cmder) error {
func (c *Ring) _process(ctx context.Context, cmd Cmder) error {
var lastErr error
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
if err := internal.Sleep(ctx, c.retryBackoff(attempt)); err != nil {
return err
}
}
shard, err := c.cmdShard(cmd)
if err != nil {
cmd.setErr(err)
return err
}
err = shard.Client.Process(cmd)
if err == nil {
return nil
}
if !internal.IsRetryableError(err, cmd.readTimeout() == nil) {
return err
lastErr = shard.Client.ProcessContext(ctx, cmd)
if lastErr == nil || !isRetryableError(lastErr, cmd.readTimeout() == nil) {
return lastErr
}
}
return cmd.Err()
}
func (c *Ring) Pipeline() Pipeliner {
pipe := Pipeline{
exec: c.processPipeline,
}
pipe.cmdable.setProcessor(pipe.Process)
return &pipe
return lastErr
}
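
internal.Sleep is not shown in this diff; presumably it is a context-aware sleep, so that the retry backoff above aborts promptly on cancellation, roughly:

package internal

import (
	"context"
	"time"
)

// Sleep waits for dur or until ctx is cancelled, whichever comes first.
func Sleep(ctx context.Context, dur time.Duration) error {
	t := time.NewTimer(dur)
	defer t.Stop()

	select {
	case <-t.C:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}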
func (c *Ring) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
func (c *Ring) WrapProcessPipeline(
fn func(oldProcess func([]Cmder) error) func([]Cmder) error,
) {
c.processPipeline = fn(c.processPipeline)
func (c *Ring) Pipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processPipeline,
}
pipe.init()
return &pipe
}
func (c *Ring) defaultProcessPipeline(cmds []Cmder) error {
func (c *Ring) processPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, func(ctx context.Context, cmds []Cmder) error {
return c.generalProcessPipeline(ctx, cmds, false)
})
}
func (c *Ring) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.TxPipeline().Pipelined(fn)
}
func (c *Ring) TxPipeline() Pipeliner {
pipe := Pipeline{
ctx: c.ctx,
exec: c.processTxPipeline,
}
pipe.init()
return &pipe
}
func (c *Ring) processTxPipeline(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, func(ctx context.Context, cmds []Cmder) error {
return c.generalProcessPipeline(ctx, cmds, true)
})
}
func (c *Ring) generalProcessPipeline(
ctx context.Context, cmds []Cmder, tx bool,
) error {
cmdsMap := make(map[string][]Cmder)
for _, cmd := range cmds {
cmdInfo := c.cmdInfo(cmd.Name())
@ -587,62 +648,36 @@ func (c *Ring) defaultProcessPipeline(cmds []Cmder) error {
cmdsMap[hash] = append(cmdsMap[hash], cmd)
}
for attempt := 0; attempt <= c.opt.MaxRetries; attempt++ {
if attempt > 0 {
time.Sleep(c.retryBackoff(attempt))
}
var wg sync.WaitGroup
for hash, cmds := range cmdsMap {
wg.Add(1)
go func(hash string, cmds []Cmder) {
defer wg.Done()
var mu sync.Mutex
var failedCmdsMap map[string][]Cmder
var wg sync.WaitGroup
for hash, cmds := range cmdsMap {
wg.Add(1)
go func(hash string, cmds []Cmder) {
defer wg.Done()
shard, err := c.shards.GetByHash(hash)
if err != nil {
setCmdsErr(cmds, err)
return
}
cn, err := shard.Client.getConn()
if err != nil {
setCmdsErr(cmds, err)
return
}
canRetry, err := shard.Client.pipelineProcessCmds(cn, cmds)
shard.Client.releaseConnStrict(cn, err)
if canRetry && internal.IsRetryableError(err, true) {
mu.Lock()
if failedCmdsMap == nil {
failedCmdsMap = make(map[string][]Cmder)
}
failedCmdsMap[hash] = cmds
mu.Unlock()
}
}(hash, cmds)
}
wg.Wait()
if len(failedCmdsMap) == 0 {
break
}
cmdsMap = failedCmdsMap
_ = c.processShardPipeline(ctx, hash, cmds, tx)
}(hash, cmds)
}
wg.Wait()
return cmdsFirstErr(cmds)
}
func (c *Ring) TxPipeline() Pipeliner {
panic("not implemented")
}
func (c *Ring) processShardPipeline(
ctx context.Context, hash string, cmds []Cmder, tx bool,
) error {
//TODO: retry?
shard, err := c.shards.GetByHash(hash)
if err != nil {
setCmdsErr(cmds, err)
return err
}
func (c *Ring) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
panic("not implemented")
if tx {
err = shard.Client.processTxPipeline(ctx, cmds)
} else {
err = shard.Client.processPipeline(ctx, cmds)
}
return err
}
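
Callers see none of this fan-out: queued commands are grouped by shard hash and flushed concurrently. A usage sketch:

package main

import "github.com/go-redis/redis/v7"

func bulkSet(ring *redis.Ring, keys []string) error {
	// Each queued command is routed to its shard's pipeline; the
	// shards are flushed in parallel by generalProcessPipeline above.
	_, err := ring.Pipelined(func(pipe redis.Pipeliner) error {
		for _, k := range keys {
			pipe.Set(k, "value", 0)
		}
		return nil
	})
	return err
}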
// Close closes the ring client, releasing any open resources.
@ -653,6 +688,39 @@ func (c *Ring) Close() error {
return c.shards.Close()
}
func (c *Ring) Watch(fn func(*Tx) error, keys ...string) error {
if len(keys) == 0 {
return fmt.Errorf("redis: Watch requires at least one key")
}
var shards []*ringShard
for _, key := range keys {
if key != "" {
shard, err := c.shards.GetByKey(hashtag.Key(key))
if err != nil {
return err
}
shards = append(shards, shard)
}
}
if len(shards) == 0 {
return fmt.Errorf("redis: Watch requires at least one shard")
}
if len(shards) > 1 {
for _, shard := range shards[1:] {
if shard.Client != shards[0].Client {
err := fmt.Errorf("redis: Watch requires all keys to be in the same shard")
return err
}
}
}
return shards[0].Client.Watch(fn, keys...)
}
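
Because Watch routes keys through hashtag.Key, callers can use hashtags to force all watched keys onto one shard; a sketch (key names are placeholders):

package main

import "github.com/go-redis/redis/v7"

func spend(ring *redis.Ring) error {
	// Identical {user1} hashtags make both keys hash to the same
	// shard, which Ring.Watch requires.
	return ring.Watch(func(tx *redis.Tx) error {
		balance, err := tx.Get("{user1}.balance").Int64()
		if err != nil && err != redis.Nil {
			return err
		}
		_, err = tx.TxPipelined(func(pipe redis.Pipeliner) error {
			pipe.Set("{user1}.balance", balance-1, 0)
			pipe.Set("{user1}.spent", 1, 0)
			return nil
		})
		return err
	}, "{user1}.balance", "{user1}.spent")
}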
func newConsistentHash(opt *RingOptions) *consistenthash.Map {
return consistenthash.New(opt.HashReplicas, consistenthash.Hash(opt.Hash))
}


@ -24,7 +24,7 @@ type Script struct {
func NewScript(src string) *Script {
h := sha1.New()
io.WriteString(h, src)
_, _ = io.WriteString(h, src)
return &Script{
src: src,
hash: hex.EncodeToString(h.Sum(nil)),
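
The hash is pre-computed so that Run can try EVALSHA first and fall back to EVAL; a typical usage sketch (script body and key are illustrative, and Run's signature is assumed from go-redis v7):

package main

import "github.com/go-redis/redis/v7"

var incrBy = redis.NewScript(`
	return redis.call("INCRBY", KEYS[1], ARGV[1])
`)

func bump(client *redis.Client) (interface{}, error) {
	// Run sends EVALSHA with the cached hash, loading the script
	// via EVAL only if the server does not have it cached yet.
	return incrBy.Run(client, []string{"counter"}, 2).Result()
}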


@ -1,6 +1,7 @@
package redis
import (
"context"
"crypto/tls"
"errors"
"net"
@ -8,8 +9,8 @@ import (
"sync"
"time"
"github.com/go-redis/redis/internal"
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/v7/internal"
"github.com/go-redis/redis/v7/internal/pool"
)
//------------------------------------------------------------------------------
@ -20,12 +21,16 @@ type FailoverOptions struct {
// The master name.
MasterName string
// A seed list of host:port addresses of sentinel nodes.
SentinelAddrs []string
SentinelAddrs []string
SentinelUsername string
SentinelPassword string
// Following options are copied from Options struct.
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
OnConnect func(*Conn) error
Username string
Password string
DB int
@ -49,14 +54,17 @@ type FailoverOptions struct {
func (opt *FailoverOptions) options() *Options {
return &Options{
Addr: "FailoverClient",
Addr: "FailoverClient",
Dialer: opt.Dialer,
OnConnect: opt.OnConnect,
DB: opt.DB,
Username: opt.Username,
Password: opt.Password,
MaxRetries: opt.MaxRetries,
MaxRetries: opt.MaxRetries,
MinRetryBackoff: opt.MinRetryBackoff,
MaxRetryBackoff: opt.MaxRetryBackoff,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
@ -66,6 +74,8 @@ func (opt *FailoverOptions) options() *Options {
PoolTimeout: opt.PoolTimeout,
IdleTimeout: opt.IdleTimeout,
IdleCheckFrequency: opt.IdleCheckFrequency,
MinIdleConns: opt.MinIdleConns,
MaxConnAge: opt.MaxConnAge,
TLSConfig: opt.TLSConfig,
}
@ -81,22 +91,18 @@ func NewFailoverClient(failoverOpt *FailoverOptions) *Client {
failover := &sentinelFailover{
masterName: failoverOpt.MasterName,
sentinelAddrs: failoverOpt.SentinelAddrs,
username: failoverOpt.SentinelUsername,
password: failoverOpt.SentinelPassword,
opt: opt,
}
c := Client{
baseClient: baseClient{
opt: opt,
connPool: failover.Pool(),
onClose: func() error {
return failover.Close()
},
},
baseClient: newBaseClient(opt, failover.Pool()),
ctx: context.Background(),
}
c.baseClient.init()
c.cmdable.setProcessor(c.Process)
c.cmdable = c.Process
c.onClose = failover.Close
return &c
}
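
A construction sketch showing the new sentinel-specific credentials alongside the master credentials (all names, addresses, and secrets are placeholders):

package main

import "github.com/go-redis/redis/v7"

func newFailoverClient() *redis.Client {
	return redis.NewFailoverClient(&redis.FailoverOptions{
		MasterName:    "mymaster",
		SentinelAddrs: []string{"localhost:26379", "localhost:26380"},

		// New in v7: credentials used when talking to the sentinels,
		// separate from the credentials for the master itself.
		SentinelPassword: "sentinel-secret",
		Password:         "redis-secret",
	})
}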
@ -104,27 +110,49 @@ func NewFailoverClient(failoverOpt *FailoverOptions) *Client {
//------------------------------------------------------------------------------
type SentinelClient struct {
baseClient
*baseClient
ctx context.Context
}
func NewSentinelClient(opt *Options) *SentinelClient {
opt.init()
c := &SentinelClient{
baseClient: baseClient{
baseClient: &baseClient{
opt: opt,
connPool: newConnPool(opt),
},
ctx: context.Background(),
}
c.baseClient.init()
return c
}
func (c *SentinelClient) Context() context.Context {
return c.ctx
}
func (c *SentinelClient) WithContext(ctx context.Context) *SentinelClient {
if ctx == nil {
panic("nil context")
}
clone := *c
clone.ctx = ctx
return &clone
}
func (c *SentinelClient) Process(cmd Cmder) error {
return c.ProcessContext(c.ctx, cmd)
}
func (c *SentinelClient) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.baseClient.process(ctx, cmd)
}
func (c *SentinelClient) pubSub() *PubSub {
pubsub := &PubSub{
opt: c.opt,
newConn: func(channels []string) (*pool.Conn, error) {
return c.newConn()
return c.newConn(context.TODO())
},
closeConn: c.connPool.CloseConn,
}
@ -132,6 +160,14 @@ func (c *SentinelClient) pubSub() *PubSub {
return pubsub
}
// Ping is used to test if a connection is still alive, or to
// measure latency.
func (c *SentinelClient) Ping() *StringCmd {
cmd := NewStringCmd("ping")
_ = c.Process(cmd)
return cmd
}
// Subscribe subscribes the client to the specified channels.
// Channels can be omitted to create an empty subscription.
func (c *SentinelClient) Subscribe(channels ...string) *PubSub {
@ -154,13 +190,13 @@ func (c *SentinelClient) PSubscribe(channels ...string) *PubSub {
func (c *SentinelClient) GetMasterAddrByName(name string) *StringSliceCmd {
cmd := NewStringSliceCmd("sentinel", "get-master-addr-by-name", name)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
func (c *SentinelClient) Sentinels(name string) *SliceCmd {
cmd := NewSliceCmd("sentinel", "sentinels", name)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
@ -168,7 +204,7 @@ func (c *SentinelClient) Sentinels(name string) *SliceCmd {
// asking for agreement to other Sentinels.
func (c *SentinelClient) Failover(name string) *StatusCmd {
cmd := NewStatusCmd("sentinel", "failover", name)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
@ -178,14 +214,79 @@ func (c *SentinelClient) Failover(name string) *StatusCmd {
// already discovered and associated with the master.
func (c *SentinelClient) Reset(pattern string) *IntCmd {
cmd := NewIntCmd("sentinel", "reset", pattern)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
// FlushConfig forces Sentinel to rewrite its configuration on disk, including
// the current Sentinel state.
func (c *SentinelClient) FlushConfig() *StatusCmd {
cmd := NewStatusCmd("sentinel", "flushconfig")
_ = c.Process(cmd)
return cmd
}
// Master shows the state and info of the specified master.
func (c *SentinelClient) Master(name string) *StringStringMapCmd {
cmd := NewStringStringMapCmd("sentinel", "master", name)
_ = c.Process(cmd)
return cmd
}
// Masters shows a list of monitored masters and their state.
func (c *SentinelClient) Masters() *SliceCmd {
cmd := NewSliceCmd("sentinel", "masters")
_ = c.Process(cmd)
return cmd
}
// Slaves shows a list of slaves for the specified master and their state.
func (c *SentinelClient) Slaves(name string) *SliceCmd {
cmd := NewSliceCmd("sentinel", "slaves", name)
_ = c.Process(cmd)
return cmd
}
// CkQuorum checks if the current Sentinel configuration is able to reach the
// quorum needed to failover a master, and the majority needed to authorize the
// failover. This command should be used in monitoring systems to check if a
// Sentinel deployment is ok.
func (c *SentinelClient) CkQuorum(name string) *StringCmd {
cmd := NewStringCmd("sentinel", "ckquorum", name)
_ = c.Process(cmd)
return cmd
}
// Monitor tells the Sentinel to start monitoring a new master with the specified
// name, ip, port, and quorum.
func (c *SentinelClient) Monitor(name, ip, port, quorum string) *StringCmd {
cmd := NewStringCmd("sentinel", "monitor", name, ip, port, quorum)
_ = c.Process(cmd)
return cmd
}
// Set changes configuration parameters of a specific master.
func (c *SentinelClient) Set(name, option, value string) *StringCmd {
cmd := NewStringCmd("sentinel", "set", name, option, value)
_ = c.Process(cmd)
return cmd
}
// Remove removes the specified master: the master will no longer be
// monitored and will be entirely removed from the internal state of
// the Sentinel.
func (c *SentinelClient) Remove(name string) *StringCmd {
cmd := NewStringCmd("sentinel", "remove", name)
_ = c.Process(cmd)
return cmd
}
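
These wrappers map one-to-one onto SENTINEL subcommands; for example, a monitoring probe might combine them like this (address and master name are placeholders):

package main

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

func probeSentinel() error {
	s := redis.NewSentinelClient(&redis.Options{Addr: "localhost:26379"})
	defer s.Close()

	// CkQuorum errors if too few sentinels agree to authorize a failover.
	if err := s.CkQuorum("mymaster").Err(); err != nil {
		return err
	}

	addr, err := s.GetMasterAddrByName("mymaster").Result()
	if err != nil {
		return err
	}
	fmt.Println("master at", addr)
	return nil
}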
type sentinelFailover struct {
sentinelAddrs []string
opt *Options
opt *Options
username string
password string
pool *pool.ConnPool
poolOnce sync.Once
@ -206,19 +307,36 @@ func (c *sentinelFailover) Close() error {
return nil
}
func (c *sentinelFailover) closeSentinel() error {
firstErr := c.pubsub.Close()
c.pubsub = nil
err := c.sentinel.Close()
if err != nil && firstErr == nil {
firstErr = err
}
c.sentinel = nil
return firstErr
}
func (c *sentinelFailover) Pool() *pool.ConnPool {
c.poolOnce.Do(func() {
c.opt.Dialer = c.dial
c.pool = newConnPool(c.opt)
opt := *c.opt
opt.Dialer = c.dial
c.pool = newConnPool(&opt)
})
return c.pool
}
func (c *sentinelFailover) dial() (net.Conn, error) {
func (c *sentinelFailover) dial(ctx context.Context, network, _ string) (net.Conn, error) {
addr, err := c.MasterAddr()
if err != nil {
return nil, err
}
if c.opt.Dialer != nil {
return c.opt.Dialer(ctx, network, addr)
}
return net.DialTimeout("tcp", addr, c.opt.DialTimeout)
}
@ -232,17 +350,35 @@ func (c *sentinelFailover) MasterAddr() (string, error) {
}
func (c *sentinelFailover) masterAddr() (string, error) {
addr := c.getMasterAddr()
if addr != "" {
return addr, nil
c.mu.RLock()
sentinel := c.sentinel
c.mu.RUnlock()
if sentinel != nil {
addr := c.getMasterAddr(sentinel)
if addr != "" {
return addr, nil
}
}
c.mu.Lock()
defer c.mu.Unlock()
if c.sentinel != nil {
addr := c.getMasterAddr(c.sentinel)
if addr != "" {
return addr, nil
}
_ = c.closeSentinel()
}
for i, sentinelAddr := range c.sentinelAddrs {
sentinel := NewSentinelClient(&Options{
Addr: sentinelAddr,
Addr: sentinelAddr,
Dialer: c.opt.Dialer,
Username: c.username,
Password: c.password,
MaxRetries: c.opt.MaxRetries,
@ -260,7 +396,7 @@ func (c *sentinelFailover) masterAddr() (string, error) {
masterAddr, err := sentinel.GetMasterAddrByName(c.masterName).Result()
if err != nil {
internal.Logf("sentinel: GetMasterAddrByName master=%q failed: %s",
internal.Logger.Printf("sentinel: GetMasterAddrByName master=%q failed: %s",
c.masterName, err)
_ = sentinel.Close()
continue
@ -277,27 +413,13 @@ func (c *sentinelFailover) masterAddr() (string, error) {
return "", errors.New("redis: all sentinels are unreachable")
}
func (c *sentinelFailover) getMasterAddr() string {
c.mu.RLock()
sentinel := c.sentinel
c.mu.RUnlock()
if sentinel == nil {
return ""
}
func (c *sentinelFailover) getMasterAddr(sentinel *SentinelClient) string {
addr, err := sentinel.GetMasterAddrByName(c.masterName).Result()
if err != nil {
internal.Logf("sentinel: GetMasterAddrByName name=%q failed: %s",
internal.Logger.Printf("sentinel: GetMasterAddrByName name=%q failed: %s",
c.masterName, err)
c.mu.Lock()
if c.sentinel == sentinel {
c.closeSentinel()
}
c.mu.Unlock()
return ""
}
return net.JoinHostPort(addr[0], addr[1])
}
@ -312,7 +434,11 @@ func (c *sentinelFailover) switchMaster(addr string) {
c.mu.Lock()
defer c.mu.Unlock()
internal.Logf("sentinel: new master=%q addr=%q",
if c._masterAddr == addr {
return
}
internal.Logger.Printf("sentinel: new master=%q addr=%q",
c.masterName, addr)
_ = c.Pool().Filter(func(cn *pool.Conn) bool {
return cn.RemoteAddr().String() != addr
@ -321,35 +447,20 @@ func (c *sentinelFailover) switchMaster(addr string) {
}
func (c *sentinelFailover) setSentinel(sentinel *SentinelClient) {
c.discoverSentinels(sentinel)
if c.sentinel != nil {
panic("not reached")
}
c.sentinel = sentinel
c.discoverSentinels()
c.pubsub = sentinel.Subscribe("+switch-master")
go c.listen(c.pubsub)
}
func (c *sentinelFailover) closeSentinel() error {
var firstErr error
err := c.pubsub.Close()
if err != nil && firstErr == err {
firstErr = err
}
c.pubsub = nil
err = c.sentinel.Close()
if err != nil && firstErr == err {
firstErr = err
}
c.sentinel = nil
return firstErr
}
func (c *sentinelFailover) discoverSentinels(sentinel *SentinelClient) {
sentinels, err := sentinel.Sentinels(c.masterName).Result()
func (c *sentinelFailover) discoverSentinels() {
sentinels, err := c.sentinel.Sentinels(c.masterName).Result()
if err != nil {
internal.Logf("sentinel: Sentinels master=%q failed: %s", c.masterName, err)
internal.Logger.Printf("sentinel: Sentinels master=%q failed: %s", c.masterName, err)
return
}
for _, sentinel := range sentinels {
@ -359,7 +470,7 @@ func (c *sentinelFailover) discoverSentinels(sentinel *SentinelClient) {
if key == "name" {
sentinelAddr := vals[i+1].(string)
if !contains(c.sentinelAddrs, sentinelAddr) {
internal.Logf("sentinel: discovered new sentinel=%q for master=%q",
internal.Logger.Printf("sentinel: discovered new sentinel=%q for master=%q",
sentinelAddr, c.masterName)
c.sentinelAddrs = append(c.sentinelAddrs, sentinelAddr)
}
@ -376,11 +487,10 @@ func (c *sentinelFailover) listen(pubsub *PubSub) {
break
}
switch msg.Channel {
case "+switch-master":
if msg.Channel == "+switch-master" {
parts := strings.Split(msg.Payload, " ")
if parts[0] != c.masterName {
internal.Logf("sentinel: ignore addr for master=%q", parts[0])
internal.Logger.Printf("sentinel: ignore addr for master=%q", parts[0])
continue
}
addr := net.JoinHostPort(parts[3], parts[4])


@ -1,8 +1,10 @@
package redis
import (
"github.com/go-redis/redis/internal/pool"
"github.com/go-redis/redis/internal/proto"
"context"
"github.com/go-redis/redis/v7/internal/pool"
"github.com/go-redis/redis/v7/internal/proto"
)
// TxFailedErr is returned when a redis transaction fails.
@ -13,28 +15,64 @@ const TxFailedErr = proto.RedisError("redis: transaction failed")
// by multiple goroutines, because Exec resets the list of watched keys.
// If you don't need WATCH, it is better to use Pipeline.
type Tx struct {
statefulCmdable
baseClient
cmdable
statefulCmdable
hooks
ctx context.Context
}
func (c *Client) newTx() *Tx {
func (c *Client) newTx(ctx context.Context) *Tx {
tx := Tx{
baseClient: baseClient{
opt: c.opt,
connPool: pool.NewStickyConnPool(c.connPool.(*pool.ConnPool), true),
},
hooks: c.hooks.clone(),
ctx: ctx,
}
tx.baseClient.init()
tx.statefulCmdable.setProcessor(tx.Process)
tx.init()
return &tx
}
func (c *Tx) init() {
c.cmdable = c.Process
c.statefulCmdable = c.Process
}
func (c *Tx) Context() context.Context {
return c.ctx
}
func (c *Tx) WithContext(ctx context.Context) *Tx {
if ctx == nil {
panic("nil context")
}
clone := *c
clone.init()
clone.hooks.lock()
clone.ctx = ctx
return &clone
}
func (c *Tx) Process(cmd Cmder) error {
return c.ProcessContext(c.ctx, cmd)
}
func (c *Tx) ProcessContext(ctx context.Context, cmd Cmder) error {
return c.hooks.process(ctx, cmd, c.baseClient.process)
}
// Watch prepares a transaction and marks the keys to be watched
// for conditional execution if there are any keys.
//
// The transaction is automatically closed when fn exits.
func (c *Client) Watch(fn func(*Tx) error, keys ...string) error {
tx := c.newTx()
return c.WatchContext(c.ctx, fn, keys...)
}
func (c *Client) WatchContext(ctx context.Context, fn func(*Tx) error, keys ...string) error {
tx := c.newTx(ctx)
if len(keys) > 0 {
if err := tx.Watch(keys...).Err(); err != nil {
_ = tx.Close()
@ -62,7 +100,7 @@ func (c *Tx) Watch(keys ...string) *StatusCmd {
args[1+i] = key
}
cmd := NewStatusCmd(args...)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
@ -74,20 +112,29 @@ func (c *Tx) Unwatch(keys ...string) *StatusCmd {
args[1+i] = key
}
cmd := NewStatusCmd(args...)
c.Process(cmd)
_ = c.Process(cmd)
return cmd
}
// Pipeline creates a new pipeline. It is more convenient to use Pipelined.
// Pipeline creates a pipeline. Usually it is more convenient to use Pipelined.
func (c *Tx) Pipeline() Pipeliner {
pipe := Pipeline{
exec: c.processTxPipeline,
ctx: c.ctx,
exec: func(ctx context.Context, cmds []Cmder) error {
return c.hooks.processPipeline(ctx, cmds, c.baseClient.processPipeline)
},
}
pipe.statefulCmdable.setProcessor(pipe.Process)
pipe.init()
return &pipe
}
// Pipelined executes commands queued in the fn in a transaction.
// Pipelined executes commands queued in the fn outside of the transaction.
// Use TxPipelined if you need transactional behavior.
func (c *Tx) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
// TxPipelined executes commands queued in the fn in the transaction.
//
// When using WATCH, EXEC will execute commands only if the watched keys
// were not modified, allowing for a check-and-set mechanism.
@ -95,16 +142,18 @@ func (c *Tx) Pipeline() Pipeliner {
// Exec always returns a list of commands. If the transaction fails,
// TxFailedErr is returned. Otherwise Exec returns the error of the first
// failed command, or nil.
func (c *Tx) Pipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipeline().Pipelined(fn)
}
// TxPipelined is an alias for Pipelined.
func (c *Tx) TxPipelined(fn func(Pipeliner) error) ([]Cmder, error) {
return c.Pipelined(fn)
return c.TxPipeline().Pipelined(fn)
}
// TxPipeline is an alias for Pipeline.
// TxPipeline creates a pipeline. Usually it is more convenient to use TxPipelined.
func (c *Tx) TxPipeline() Pipeliner {
return c.Pipeline()
pipe := Pipeline{
ctx: c.ctx,
exec: func(ctx context.Context, cmds []Cmder) error {
return c.hooks.processTxPipeline(ctx, cmds, c.baseClient.processTxPipeline)
},
}
pipe.init()
return &pipe
}
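
Put together, Watch plus TxPipelined gives the optimistic check-and-set described above; a canonical sketch with a bounded retry loop (key and retry count are illustrative):

package main

import "github.com/go-redis/redis/v7"

func increment(client *redis.Client, key string) error {
	txf := func(tx *redis.Tx) error {
		n, err := tx.Get(key).Int64()
		if err != nil && err != redis.Nil {
			return err
		}
		// Queued commands are wrapped in MULTI/EXEC; EXEC only runs
		// if the watched key was not modified in the meantime.
		_, err = tx.TxPipelined(func(pipe redis.Pipeliner) error {
			pipe.Set(key, n+1, 0)
			return nil
		})
		return err
	}

	for i := 0; i < 3; i++ {
		err := client.Watch(txf, key)
		if err != redis.TxFailedErr {
			return err
		}
		// The watched key changed under us; retry.
	}
	return redis.TxFailedErr
}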


@ -1,7 +1,9 @@
package redis
import (
"context"
"crypto/tls"
"net"
"time"
)
@ -18,7 +20,9 @@ type UniversalOptions struct {
// Common options.
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
OnConnect func(*Conn) error
Username string
Password string
MaxRetries int
MinRetryBackoff time.Duration
@ -46,15 +50,18 @@ type UniversalOptions struct {
MasterName string
}
func (o *UniversalOptions) cluster() *ClusterOptions {
// Cluster returns cluster options created from the universal options.
func (o *UniversalOptions) Cluster() *ClusterOptions {
if len(o.Addrs) == 0 {
o.Addrs = []string{"127.0.0.1:6379"}
}
return &ClusterOptions{
Addrs: o.Addrs,
Dialer: o.Dialer,
OnConnect: o.OnConnect,
Username: o.Username,
Password: o.Password,
MaxRedirects: o.MaxRedirects,
@ -80,7 +87,8 @@ func (o *UniversalOptions) cluster() *ClusterOptions {
}
}
func (o *UniversalOptions) failover() *FailoverOptions {
// Failover returns failover options created from the universal options.
func (o *UniversalOptions) Failover() *FailoverOptions {
if len(o.Addrs) == 0 {
o.Addrs = []string{"127.0.0.1:26379"}
}
@ -88,9 +96,12 @@ func (o *UniversalOptions) failover() *FailoverOptions {
return &FailoverOptions{
SentinelAddrs: o.Addrs,
MasterName: o.MasterName,
OnConnect: o.OnConnect,
Dialer: o.Dialer,
OnConnect: o.OnConnect,
DB: o.DB,
Username: o.Username,
Password: o.Password,
MaxRetries: o.MaxRetries,
@ -112,7 +123,8 @@ func (o *UniversalOptions) failover() *FailoverOptions {
}
}
func (o *UniversalOptions) simple() *Options {
// Simple returns basic options created from the universal options.
func (o *UniversalOptions) Simple() *Options {
addr := "127.0.0.1:6379"
if len(o.Addrs) > 0 {
addr = o.Addrs[0]
@ -120,9 +132,11 @@ func (o *UniversalOptions) simple() *Options {
return &Options{
Addr: addr,
Dialer: o.Dialer,
OnConnect: o.OnConnect,
DB: o.DB,
Username: o.Username,
Password: o.Password,
MaxRetries: o.MaxRetries,
@ -147,14 +161,18 @@ func (o *UniversalOptions) simple() *Options {
// --------------------------------------------------------------------
// UniversalClient is an abstract client which - based on the provided options -
// can connect to either clusters, or sentinel-backed failover instances or simple
// single-instance servers. This can be useful for testing cluster-specific
// applications locally.
// can connect to either clusters, or sentinel-backed failover instances
// or simple single-instance servers. This can be useful for testing
// cluster-specific applications locally.
type UniversalClient interface {
Cmdable
Context() context.Context
AddHook(Hook)
Watch(fn func(*Tx) error, keys ...string) error
Do(args ...interface{}) *Cmd
DoContext(ctx context.Context, args ...interface{}) *Cmd
Process(cmd Cmder) error
WrapProcess(fn func(oldProcess func(cmd Cmder) error) func(cmd Cmder) error)
ProcessContext(ctx context.Context, cmd Cmder) error
Subscribe(channels ...string) *PubSub
PSubscribe(channels ...string) *PubSub
Close() error
@ -162,6 +180,7 @@ type UniversalClient interface {
var _ UniversalClient = (*Client)(nil)
var _ UniversalClient = (*ClusterClient)(nil)
var _ UniversalClient = (*Ring)(nil)
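
Note that WrapProcess is gone from the interface; v7's replacement for cross-cutting instrumentation is AddHook. A minimal logging hook sketch (the four-method Hook interface is assumed from go-redis v7; the logging itself is illustrative):

package main

import (
	"context"
	"log"

	"github.com/go-redis/redis/v7"
)

type loggingHook struct{}

func (loggingHook) BeforeProcess(ctx context.Context, cmd redis.Cmder) (context.Context, error) {
	log.Printf("starting %s", cmd.Name())
	return ctx, nil
}

func (loggingHook) AfterProcess(ctx context.Context, cmd redis.Cmder) error {
	log.Printf("finished %s err=%v", cmd.Name(), cmd.Err())
	return nil
}

func (loggingHook) BeforeProcessPipeline(ctx context.Context, cmds []redis.Cmder) (context.Context, error) {
	return ctx, nil
}

func (loggingHook) AfterProcessPipeline(ctx context.Context, cmds []redis.Cmder) error {
	return nil
}

func instrument(c redis.UniversalClient) {
	c.AddHook(loggingHook{})
}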
// NewUniversalClient returns a new multi client. The type of client returned depends
// on the following three conditions:
@ -171,9 +190,9 @@ var _ UniversalClient = (*ClusterClient)(nil)
// 3. otherwise, a single-node redis Client will be returned.
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.failover())
return NewFailoverClient(opts.Failover())
} else if len(opts.Addrs) > 1 {
return NewClusterClient(opts.cluster())
return NewClusterClient(opts.Cluster())
}
return NewClient(opts.simple())
return NewClient(opts.Simple())
}
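
So one options struct can yield any of the three concrete clients; a selection sketch (addresses are placeholders):

package main

import "github.com/go-redis/redis/v7"

func examples() {
	// MasterName set -> sentinel-backed failover *redis.Client.
	_ = redis.NewUniversalClient(&redis.UniversalOptions{
		MasterName: "mymaster",
		Addrs:      []string{"localhost:26379"},
	})

	// Several addresses, no MasterName -> *redis.ClusterClient.
	_ = redis.NewUniversalClient(&redis.UniversalOptions{
		Addrs: []string{"localhost:7000", "localhost:7001"},
	})

	// One address -> plain *redis.Client.
	_ = redis.NewUniversalClient(&redis.UniversalOptions{
		Addrs: []string{"localhost:6379"},
	})
}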

vendor/modules.txt (vendored)

@ -14,7 +14,6 @@ gitea.com/macaron/binding
## explicit
gitea.com/macaron/cache
gitea.com/macaron/cache/memcache
gitea.com/macaron/cache/redis
# gitea.com/macaron/captcha v0.0.0-20190822015246-daa973478bae
## explicit
gitea.com/macaron/captcha
@ -44,7 +43,6 @@ gitea.com/macaron/session/memcache
gitea.com/macaron/session/mysql
gitea.com/macaron/session/nodb
gitea.com/macaron/session/postgres
gitea.com/macaron/session/redis
# gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7
## explicit
gitea.com/macaron/toolbox
@ -347,15 +345,15 @@ github.com/go-openapi/strfmt
github.com/go-openapi/swag
# github.com/go-openapi/validate v0.19.10
github.com/go-openapi/validate
# github.com/go-redis/redis v6.15.2+incompatible
# github.com/go-redis/redis/v7 v7.4.0
## explicit
github.com/go-redis/redis
github.com/go-redis/redis/internal
github.com/go-redis/redis/internal/consistenthash
github.com/go-redis/redis/internal/hashtag
github.com/go-redis/redis/internal/pool
github.com/go-redis/redis/internal/proto
github.com/go-redis/redis/internal/util
github.com/go-redis/redis/v7
github.com/go-redis/redis/v7/internal
github.com/go-redis/redis/v7/internal/consistenthash
github.com/go-redis/redis/v7/internal/hashtag
github.com/go-redis/redis/v7/internal/pool
github.com/go-redis/redis/v7/internal/proto
github.com/go-redis/redis/v7/internal/util
# github.com/go-sql-driver/mysql v1.5.0
## explicit
github.com/go-sql-driver/mysql
@ -692,6 +690,7 @@ github.com/stretchr/testify/require
# github.com/subosito/gotenv v1.2.0
github.com/subosito/gotenv
# github.com/syndtr/goleveldb v1.0.0
## explicit
github.com/syndtr/goleveldb/leveldb
github.com/syndtr/goleveldb/leveldb/cache
github.com/syndtr/goleveldb/leveldb/comparer