Contents: how Selectx works, exiting rows.Next() early, *Row Reference
Selectx and SelectRowx are sq's way of mapping SELECT results into Go structs. Returningx and ReturningRowx serve the same purpose for INSERT, UPDATE and DELETE (note that RETURNING is a postgres-only feature).
Usefully, Selectx and SelectRowx double as the SELECT clause in a query. That means if you want to put a column in the SELECT clause, you must map it to a Go variable. What does it mean to 'map a column to a Go variable'? Just simple assignment:
var user User
u := tables.USERS().As("u")
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    SelectRowx(func(row *sq.Row) { // <-- mapper function
        user.UserID = row.Int(u.USER_ID)
        user.Name = row.String(u.NAME)
        user.Email = row.String(u.EMAIL)
        user.CreatedAt = row.Time(u.CREATED_AT)
    }).
    Fetch(DB)
The above query is equivalent to

SELECT u.user_id, u.name, u.email, u.created_at
FROM public.users AS u
WHERE u.name = 'bob'

except that it also maps the results back to the User struct in one fell swoop. For a full list of methods you can call on *sq.Row, see the *Row Reference.
SelectRowx is used to map one row; Selectx is used to map many rows. For Selectx, you must additionally pass in an accumulator function, which accumulates the result after each call to the mapper function.
var user User
var users []User
u := tables.USERS().As("u")
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    Selectx(func(row *sq.Row) { // <-- mapper function
        user.UserID = row.Int(u.USER_ID)
        user.Name = row.String(u.NAME)
        user.Email = row.String(u.EMAIL)
        user.CreatedAt = row.Time(u.CREATED_AT)
    }, func() { // <-- accumulator function
        users = append(users, user)
    }).
    Fetch(DB)
ReturningRowx and Returningx work in a similar way, except they apply to INSERT, UPDATE and DELETE queries.
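Here is a minimal sketch of ReturningRowx used with an INSERT on the same users table as above (the InsertInto/Columns/Values builder calls and the inserted values are illustrative):

var user User
u := tables.USERS()
err := sq.InsertInto(u).
    Columns(u.NAME, u.EMAIL).
    Values("bob", "bob@email.com").
    ReturningRowx(func(row *sq.Row) { // same mapper shape as SelectRowx
        user.UserID = row.Int(u.USER_ID)
        user.CreatedAt = row.Time(u.CREATED_AT)
    }).
    Fetch(DB)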
If you use SelectRowx or ReturningRowx and no rows are present, Fetch() will return sql.ErrNoRows.
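A no-rows check is therefore just an errors.Is comparison; a sketch, assuming the errors and database/sql packages are imported:

err := sq.From(u).
    Where(u.NAME.EqString("nobody")). // 'nobody' matches no rows
    SelectRowx(func(row *sq.Row) {
        user.Name = row.String(u.NAME)
    }).
    Fetch(DB)
if errors.Is(err, sql.ErrNoRows) {
    // handle the absence of a matching row here
}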
It is important that you do not put any accumulator code inside the mapper function. The mapper function should run the same way every time it is called; do not use any variable inside it that saves state between calls and affects the flow of the code.
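To illustrate, the sketch below is an anti-pattern: because sq also invokes the mapper once before the query runs (explained below), a branch like this registers a different set of columns on different calls:

// Anti-pattern: a stateful mapper. Do not do this.
calls := 0
err := sq.From(u).
    Selectx(func(row *sq.Row) {
        calls++
        if calls > 1 {
            // Skipped on the initial pre-query call, so u.NAME is never
            // registered as a column and there is nothing here to scan.
            user.Name = row.String(u.NAME)
        }
    }, func() {
        users = append(users, user)
    }).
    Fetch(DB)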
Prerequisite: you know how to do the traditional database/sql DB.Query(query, args...) → rows.Scan(&a, &b, &c...) way of scanning.
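For reference, that flow looks something like this (a sketch; table and column names mirror the earlier examples):

rows, err := DB.Query("SELECT user_id, name, email, created_at FROM users WHERE name = $1", "bob")
if err != nil {
    return err
}
defer rows.Close()
for rows.Next() {
    var userID int
    var name, email string
    var createdAt time.Time
    // The scan destinations must match the selected columns in number,
    // order and type; keeping them in sync by hand is the error-prone part.
    err = rows.Scan(&userID, &name, &email, &createdAt)
    if err != nil {
        return err
    }
    users = append(users, User{UserID: userID, Name: name, Email: email, CreatedAt: createdAt})
}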
Before the query is run, sq will run the mapper function that you passed to Selectx with an empty *Row object. This is why you should avoid anything stateful in the mapper: the initial dummy call to the mapper function may end up causing some undesired side effect. The empty *Row object is responsible for collecting the columns you are interested in, but more importantly it also collects the data types that you want the columns to be scanned into.
Basically there are two phases: a column-collecting phase before the query is run, and a scanning phase after. When you call row.Int(u.USER_ID), row collects two things -- the column u.USER_ID, but also the data type it should be scanned into: sql.NullInt64. It adds them to two separate slices, the fields slice and the dest slice respectively. The example below illustrates how the row builds up the fields slice in conjunction with the dest slice:
Before query:

// *sq.Row before mapper:
// fields: []
// dest:   []

// mapper function
func(row *sq.Row) {
    //           added
    //              ↓
    // fields: [u.USER_ID     ]
    // dest:   [*sql.NullInt64]
    user.UserID = row.Int(u.USER_ID)

    //                          added
    //                            ↓
    // fields: [u.USER_ID,      u.NAME         ]
    // dest:   [*sql.NullInt64, *sql.NullString]
    user.Name = row.String(u.NAME)

    //                                            added
    //                                              ↓
    // fields: [u.USER_ID,      u.NAME,          u.EMAIL        ]
    // dest:   [*sql.NullInt64, *sql.NullString, *sql.NullString]
    user.Email = row.String(u.EMAIL)

    //                                                               added
    //                                                                 ↓
    // fields: [u.USER_ID,      u.NAME,          u.EMAIL,         u.CREATED_AT ]
    // dest:   [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
    user.CreatedAt = row.Time(u.CREATED_AT)
}

// *sq.Row after mapper:
// fields: [u.USER_ID, u.NAME, u.EMAIL, u.CREATED_AT]
// dest:   [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
The fields are inserted into the SELECT query, the query is then run, and the results are scanned back into the dest slice. Because the dest slice was built in conjunction with the fields slice, their number of elements will always match up, as will their types. This eliminates the common problem of adding a column to the SELECT query but forgetting to add a corresponding variable to scan it into.
After *Row.dest is populated with scan results, it is fed back to the mapper function. This time it is non-empty and will actually yield the scan value when asked. An internal index keeps track of which column is currently being assigned, and is incremented after each assignment:
After query:

// Results are scanned into dest with something like this.
// Code is not exactly accurate, but it is the general idea.
//
//                                       fields
//                              ┌──────────────────────────────────────┐
// rows, err := DB.Query(SELECT u.USER_ID, u.NAME, u.EMAIL, u.CREATED_AT FROM u WHERE u.NAME = 'bob')
// for rows.Next() {
//     rows.Scan(*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime)
//               └─────────────────────────────────────────────────────────────┘
//                                             dest
//     mapper(rows)
//     accumulator()
// }

// mapper function
func(row *sq.Row) {
    //        index
    //          ↓
    // dest: [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
    user.UserID = row.Int(u.USER_ID)

    //                         index
    //                           ↓
    // dest: [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
    user.Name = row.String(u.NAME)

    //                                          index
    //                                            ↓
    // dest: [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
    user.Email = row.String(u.EMAIL)

    //                                                           index
    //                                                             ↓
    // dest: [*sql.NullInt64, *sql.NullString, *sql.NullString, *sql.NullTime]
    user.CreatedAt = row.Time(u.CREATED_AT)
}
After the mapper function completes an iteration, the user struct is populated with the row's results. The accumulator function is then called, adding a copy of the populated user to the list of users:

// accumulator function
func() {
    users = append(users, user)
}

The user variable is then re-mapped using the next row, the accumulator is called again, and so on until all database rows have been iterated.
Typically Selectx iterates through all the rows of the query by calling rows.Next() repeatedly to completion. However, if you wish to break the loop early for whatever reason, you can raise a panic in the accumulator function with the constant sq.ExitPeacefully; the loop then terminates without error.
err := sq.From(tbl).
    Selectx(func(row *sq.Row) {
        /* ...mapper code... */
    }, func() { // <-- accumulator function
        if someCondition {
            panic(sq.ExitPeacefully)
        }
    }).
    Fetch(DB)
// err is nil
If you panic with a particular error, sq will terminate the loop and return that particular error. Any other panicked value will be formatted into a string and returned as a generic error.
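For example, here is a sketch of aborting with your own error once too many rows have accumulated (ErrTooManyRows is a name made up for this example):

var ErrTooManyRows = errors.New("too many rows")

err := sq.From(u).
    Selectx(func(row *sq.Row) {
        user.Name = row.String(u.NAME)
    }, func() {
        users = append(users, user)
        if len(users) > 100 {
            panic(ErrTooManyRows) // Fetch stops iterating and returns ErrTooManyRows
        }
    }).
    Fetch(DB)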
Null values are implicitly converted to their zero values when you use row.Int, row.String, row.Bool or row.Time:
// using only primitive values, not sql.NullXXX
var userID int
var name, email string
var createdAt time.Time
u := tables.USERS().As("u")
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    SelectRowx(func(row *sq.Row) {
        // If any of these columns are NULL, you get the zero value
        userID = row.Int(u.USER_ID)
        name = row.String(u.NAME)
        email = row.String(u.EMAIL)
        createdAt = row.Time(u.CREATED_AT)
    }).
    Fetch(DB)
If you want to actually know whether a column is NULL, you can use row.IntValid, row.StringValid, row.BoolValid or row.TimeValid instead. Each returns a boolean reporting whether the column value is valid, i.e. not NULL:
// reports whether each column value is valid (not NULL)
var userIDValid, nameValid, emailValid, createdAtValid bool
u := tables.USERS().As("u")
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    SelectRowx(func(row *sq.Row) {
        userIDValid = row.IntValid(u.USER_ID)
        nameValid = row.StringValid(u.NAME)
        emailValid = row.StringValid(u.EMAIL)
        createdAtValid = row.TimeValid(u.CREATED_AT)
    }).
    Fetch(DB)
Of course, you can still use sql.NullXXX if you want:
var userID sql.NullInt64
var name, email sql.NullString
var createdAt sql.NullTime
u := tables.USERS().As("u")
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    SelectRowx(func(row *sq.Row) {
        userID = row.NullInt64(u.USER_ID)
        name = row.NullString(u.NAME)
        email = row.NullString(u.EMAIL)
        createdAt = row.NullTime(u.CREATED_AT)
    }).
    Fetch(DB)
For a full list of available methods on *Row, see the *Row Reference. If you want to scan some other type not mentioned in the reference, you can use row.ScanInto instead.
If you find yourself writing the same mapper code over and over, good news: the mapper is just a function, so you can make it a named function and use that everywhere. A useful pattern is to make the mapper a method on the struct's pointer receiver:
// RowMapper encapsulates the logic needed to map a row to a *User struct. It's
// a good practice to take in the table argument needed to populate the struct,
// in this case the tables.TABLE_USERS struct. This is so that the table(s)
// that the RowMapper needs are made explicit in the function signature.
func (user *User) RowMapper(u tables.TABLE_USERS) func(*sq.Row) {
    return func(row *sq.Row) {
        user.UserID = row.Int(u.USER_ID)
        user.Name = row.String(u.NAME)
        user.Email = row.String(u.EMAIL)
        user.CreatedAt = row.Time(u.CREATED_AT)
    }
}
u := tables.USERS()
user := &User{} // <-- declare a *User, and make sure to initialize it!
var users []User

// This query is now 4 lines because the code needed to SELECT and map columns
// is abstracted away into its own function. The query only has to filter and
// join the necessary tables.
err := sq.From(u).
    Where(u.NAME.EqString("bob")).
    Selectx(user.RowMapper(u), func() { users = append(users, *user) }).
    Fetch(DB)
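The same named mapper slots straight into single-row lookups as well; a sketch (the user_id filter is just for illustration):

user := &User{}
err := sq.From(u).
    Where(u.USER_ID.EqInt(1)).
    SelectRowx(user.RowMapper(u)).
    Fetch(DB)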
The *Row methods are strict in what kind of Fields they accept: row.Int only accepts NumberFields, row.String only accepts StringFields, row.Time only accepts TimeFields, etc. If you want to pass in a custom SQL expression (made with Fieldf) like:
// SQL
COALESCE(u.score, MAX(u.overall_score, 50))
// Go
Fieldf("COALESCE(?, MAX(?, 50))", u.SCORE, u.OVERALL_SCORE)
you need to use row.ScanInto. The following lines do roughly the same thing:
user.UserID = row.Int(u.USER_ID)
/* is similar to doing */
row.ScanInto(&user.UserID, u.USER_ID)
// note: the line above will choke on NULL values because it's scanning
// directly into an int
The difference is that row.ScanInto accepts any Field, so you can pass any custom SQL expression into it:
// Equivalent Query:
// SELECT COALESCE(u.score, MAX(u.overall_score, 50)) FROM public.users AS u
var score int
u := tables.USERS().As("u")
err := sq.From(u).
    SelectRowx(func(row *sq.Row) {
        row.ScanInto(&score, sq.Fieldf("COALESCE(?, MAX(?, 50))", u.SCORE, u.OVERALL_SCORE))
    }).
    Fetch(DB)
This is how sql.Scanner types can be scanned as well: simply pass in a pointer to your custom type implementing sql.Scanner, as well as the Field you are scanning from.

row.ScanInto(&customType, tbl.CUSTOM_FIELD)
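To make that concrete, here is a sketch with a made-up Coordinates type and a hypothetical tbl.LOCATION column that stores "lat,lng" strings:

// Coordinates is a hypothetical custom type implementing sql.Scanner.
type Coordinates struct {
    Lat, Lng float64
}

func (c *Coordinates) Scan(value interface{}) error {
    var s string
    switch v := value.(type) {
    case string:
        s = v
    case []byte: // many drivers hand back raw bytes
        s = string(v)
    default:
        return fmt.Errorf("Coordinates: unsupported type %T", value)
    }
    _, err := fmt.Sscanf(s, "%f,%f", &c.Lat, &c.Lng)
    return err
}

var coords Coordinates
err := sq.From(tbl).
    SelectRowx(func(row *sq.Row) {
        row.ScanInto(&coords, tbl.LOCATION) // the column is scanned through Coordinates.Scan
    }).
    Fetch(DB)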
// Note: this reference describes the Postgres version of the package.
// Everything is the same for the MySQL version, except it lacks ScanArray.
type *Row struct {
    ScanInto(dest interface{}, field Field)
    // only available for postgres, see Array
    ScanArray(slice interface{}, field Field)

    // bool
    // note that BooleanFields are also Predicates, so they can be used here
    Bool(predicate Predicate) bool
    BoolValid(predicate Predicate) bool
    NullBool(predicate Predicate) sql.NullBool

    // float64
    Float64(field NumberField) float64
    Float64Valid(field NumberField) bool
    NullFloat64(field NumberField) sql.NullFloat64

    // int
    Int(field NumberField) int
    IntValid(field NumberField) bool

    // int64
    Int64(field NumberField) int64
    Int64Valid(field NumberField) bool
    NullInt64(field NumberField) sql.NullInt64

    // string
    String(field StringField) string
    StringValid(field StringField) bool
    NullString(field StringField) sql.NullString

    // time.Time
    Time(field TimeField) time.Time
    TimeValid(field TimeField) bool
    NullTime(field TimeField) sql.NullTime
}
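Since ScanArray only gets a one-line mention above, here is a sketch of it scanning a postgres array column, assuming a hypothetical text[] column u.TAGS:

// postgres only: scan a text[] column into a Go string slice
var tags []string
u := tables.USERS().As("u")
err := sq.From(u).
    SelectRowx(func(row *sq.Row) {
        row.ScanArray(&tags, u.TAGS)
    }).
    Fetch(DB)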