Tuesday, March 15, 2016

How to prevent / handle ErrBadConn with Azure SQL Database


I'm using this driver: https://github.com/denisenkom/go-mssqldb, and in production against an Azure SQL Database at the Standard S3 level we are getting far too many ErrBadConn ("driver: bad connection") errors.

How can I prevent, or at least gracefully handle, this? Here's some code to show how things are set up.

A typical database function call

package dal

var db *sql.DB  type Database struct{}  func (d Database) Open() {     newDB, err := sql.Open("mssql", os.Getenv("dbconnestion"))     if err != nil {         panic(err)     }      err = newDB.Ping()     if err != nil {         panic(err)     }      db = newDB }  func (d Database) Close() {     db.Close() } // ... in another file func (e *Entities) Add(entity Entity) (int64, error) {     stmt, err := db.Prepare("INSERT INTO Entities VALUES(?, ?)")     if err != nil {         return -1, err     }     defer stmt.Close()      result, err := stmt.Exec(entity.Field1, entity.Field2)      if err != nil {         return -1, err     }      return result.LastInsertId() } 

In a web API

func main() {
    db := dal.Database{}
    db.Open()
    defer db.Close()

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        entities := &dal.Entities{}
        id, err := entities.Add(dal.Entity{Field1: "a", Field2: "b"})
        if err != nil {
            // here, all across my web API and the other web packages or
            // CLI commands that use the dal, I'm getting random ErrBadConn
        }
    })
}

So in short, the dal package is shared across multiple Azure web apps and command line Go apps.

I cannot see a pattern in these errors; they are frequent and occur randomly. We are using Bugsnag to log the errors from all our apps.

For completeness: sometimes our Standard S3 limit of 200 concurrent connections is reached.
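Since the 200-connection cap is sometimes hit, one thing database/sql can do is keep this app under it. Here is a minimal sketch of capping the pool at Open time; the stub driver only stands in for go-mssqldb so the snippet runs without a database (in the real dal you would open "mssql" with the go-mssqldb import), and the specific limits are assumptions to tune, not recommendations:

```go
package main

import (
	"database/sql"
	"database/sql/driver"
	"errors"
	"fmt"
	"sync"
)

// stubDriver stands in for go-mssqldb so this sketch runs anywhere;
// it hands out connections that do nothing.
type stubDriver struct{}
type stubConn struct{}

func (stubDriver) Open(name string) (driver.Conn, error) { return stubConn{}, nil }
func (stubConn) Prepare(q string) (driver.Stmt, error)   { return nil, errors.New("stub") }
func (stubConn) Close() error                            { return nil }
func (stubConn) Begin() (driver.Tx, error)               { return nil, errors.New("stub") }

var registerOnce sync.Once

// openTuned (a hypothetical name) opens the pool and caps it well below
// Azure's S3 limit of 200 concurrent connections, leaving headroom for
// the other apps that share the same database.
func openTuned() (*sql.DB, error) {
	registerOnce.Do(func() { sql.Register("stub", stubDriver{}) })
	db, err := sql.Open("stub", "dsn")
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(50) // hard cap for this one app; assumption to tune
	db.SetMaxIdleConns(10) // keep a few connections warm instead of the small default
	return db, nil
}

func main() {
	db, err := openTuned()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	fmt.Println("max open:", db.Stats().MaxOpenConnections)
}
```

With every app that shares the S3 database capped this way, the sum of the caps can be kept under 200, so the server-side limit is never the thing that kills connections.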

I've triple-checked everywhere in the package that accesses the database, making sure that all sql.Rows are closed and that every statement from db.Prepare is closed. As an example, here's what a typical query function looks like:

func (e *Entities) GetByID(id int64) ([]Entity, error) {
    rows, err := db.Query("SELECT * FROM Entities WHERE ID = ?", id)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var results []Entity
    for rows.Next() {
        var r Entity
        err := readEntity(rows, &r)
        if err != nil {
            return nil, err
        }
        results = append(results, r)
    }

    if err = rows.Err(); err != nil {
        return nil, err
    }

    return results, nil
}

The readEntity function basically just calls Scan on the fields.

I don't think it's code related; unit tests run fine locally. It's just that once deployed to Azure, after running for some time, the driver: bad connection errors start to show up very frequently.

I've run this query, as suggested in this question: Azure SQL server max pool size was reached error

select * from sys.dm_exec_requests

But I'm not exactly sure what I should be paying attention to here.

Things I've done / made sure of:

  1. As suggested, the database/sql package handles the connection pool, so having a global variable for the database connection should be fine.

  2. Making sure all sql.Rows and all statements from db.Prepare are closed everywhere.

  3. Increased the Azure SQL level to S3.

  4. There's an issue on the SQL driver I'm using about Azure SQL putting database connections into a bad state if they idle for more than 2 minutes: https://github.com/denisenkom/go-mssqldb/issues/81
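Point 4 suggests a concrete mitigation: recycle pooled connections before they sit around long enough for Azure to kill them server-side. Go 1.6 added DB.SetConnMaxLifetime for this. A minimal sketch follows; the stub driver only stands in for go-mssqldb so the snippet runs without a database, and the 90-second figure is an assumption derived from the roughly 2-minute window described in issue #81:

```go
package main

import (
	"database/sql"
	"database/sql/driver"
	"errors"
	"fmt"
	"sync"
	"time"
)

// A do-nothing stub driver so the sketch runs without a real database;
// the dal would use the "mssql" driver instead.
type stubDriver struct{}
type stubConn struct{}

func (stubDriver) Open(name string) (driver.Conn, error) { return stubConn{}, nil }
func (stubConn) Prepare(q string) (driver.Stmt, error)   { return nil, errors.New("stub") }
func (stubConn) Close() error                            { return nil }
func (stubConn) Begin() (driver.Tx, error)               { return nil, errors.New("stub") }

var once sync.Once

// openWithLifetime (a hypothetical name) opens the pool and ensures no
// connection is reused after 90 seconds, so none survives long enough
// to be put into a bad state by Azure's idle timeout.
func openWithLifetime() (*sql.DB, error) {
	once.Do(func() { sql.Register("stublt", stubDriver{}) })
	db, err := sql.Open("stublt", "dsn")
	if err != nil {
		return nil, err
	}
	// Connections older than 90s are closed and replaced lazily by the
	// pool instead of being handed back to callers.
	db.SetConnMaxLifetime(90 * time.Second)
	return db, nil
}

func main() {
	db, err := openWithLifetime()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	fmt.Println("lifetime set")
}
```

SetConnMaxLifetime bounds a connection's total age rather than its idle time, which is a stronger guarantee: even a busy connection is retired before the server-side window can matter.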

Is the way database/sql handles connection pooling in any way incompatible with the way Azure SQL Database manages connections?

Is there a way to gracefully handle this? I know that C# / Entity Framework has connection resiliency / retry logic for Azure SQL; is that for similar reasons? How could I implement something like it without touching the error handling everywhere? I mean, I clearly don't want to do something like this:

if err == driver.ErrBadConn {
    // close and re-open the global db object
    // retry
}

Surely that's not my only option here?

Any help would be extremely welcome. Thank you.

1 Answer

Answer 1

I'm not seeing anywhere that you close your database connections. Best practice (in other languages; I'm not positive about Go) is to close / deallocate / dereference the database object after use, to release the connection back into the pool. If you're running out of connection resources, you're being told that you need to release things. Holding the reference open means nobody else can use that connection, so it'll stay around until it times out and gets recycled. This is why you're getting the error intermittently rather than consistently: it depends on a certain number of connections being in flight within a certain period of time.
