The uncastable through reflection check should be testing for foreign
key type overrides on the join model and not a non-integer type on the
through primary key.
Per [this discussion][arel-discussion] on the discourse forum, this is
an addition to Arel for supporting `@>` (contains) and `&&` (overlaps)
operators in PostgreSQL. They are useful for GIN-indexed data such as a
`jsonb` or array column.
[arel-discussion]: https://discuss.rubyonrails.org/t/what-has-happened-to-arel/74383/51
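As a rough sketch (these are stand-in classes, not Arel's real API), an infix-operator node for these operators could render like this:

```ruby
# Stand-in for an Arel-style infix operator node (hypothetical classes,
# not Arel's actual implementation).
InfixOperation = Struct.new(:operator, :left, :right) do
  def to_sql
    "#{left} #{operator} #{right}"
  end
end

# `@>` asks whether the column value contains the right-hand operand;
# `&&` asks whether two arrays/ranges overlap. Both can use a GIN index.
contains = InfixOperation.new("@>", %("posts"."tags"), "'{ruby}'")
overlaps = InfixOperation.new("&&", %("posts"."tags"), "'{ruby,rails}'")

puts contains.to_sql # "posts"."tags" @> '{ruby}'
puts overlaps.to_sql # "posts"."tags" && '{ruby,rails}'
```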
`ReadOnlyTest#test_field_named_field` implicitly commits the
transaction opened by `ReadOnlyTest#setup`, because of MySQL's
implicit commit behavior.
This commit addresses the failure at https://buildkite.com/rails/rails/builds/68962#68213887-1cef-4f76-9c95-aebc8799c806
Here are the minimal steps to reproduce:
```
% ARCONN=mysql2 bin/test test/cases/readonly_test.rb test/cases/dirty_test.rb test/cases/associations/inner_join_association_test.rb \
-n "/^(?:ReadOnlyTest#(?:test_has_many_with_through_is_not_implicitly_marked_readonly)|DirtyTest#(?:test_field_named_field)|InnerJoinAssociationTest#(?:test_eager_load_with_string_joins))$/" --seed 50855
Using mysql2
Run options: -n "/^(?:ReadOnlyTest#(?:test_has_many_with_through_is_not_implicitly_marked_readonly)|DirtyTest#(?:test_field_named_field)|InnerJoinAssociationTest#(?:test_eager_load_with_string_joins))$/" --seed 50855
..F
Failure:
InnerJoinAssociationTest#test_eager_load_with_string_joins [/Users/yahonda/src/github.com/rails/rails/activerecord/test/cases/associations/inner_join_association_test.rb:87]:
Expected: 3
Actual: 4
bin/test test/cases/associations/inner_join_association_test.rb:82
Finished in 0.114674s, 26.1611 runs/s, 26.1611 assertions/s.
3 runs, 3 assertions, 1 failures, 0 errors, 0 skips
```
References:
- "13.3.3 Statements That Cause an Implicit Commit"
https://dev.mysql.com/doc/refman/8.0/en/implicit-commit.html
datetime with precision was passing `assert_no_microsecond_precision`
unintentionally, because `/\d\z/` matches datetimes both with and
without precision. Fixed by changing the time value and the regex so
that the with- and without-precision cases are easy to tell apart.
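The mismatch is easy to see with a standalone sketch (the timestamps here are made up):

```ruby
# /\d\z/ only requires the string to end in a digit, so it matches a
# timestamp with or without fractional seconds.
with_precision    = "2000-01-01 01:23:45.123456"
without_precision = "2000-01-01 01:23:45"

/\d\z/.match?(with_precision)    # true
/\d\z/.match?(without_precision) # true -- the assertion passed unintentionally

# Requiring an explicit fractional part tells the two cases apart.
/\.\d+\z/.match?(with_precision)    # true
/\.\d+\z/.match?(without_precision) # false
```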
Fix stub_version to consider schema_cache. ref: #35795
Remove failing test for unsupported version of MariaDB. ref: fb6743a
Before the IN clause optimization in 70ddb8a, Active Record generated
SQL with binds when `prepared_statements: true`:
```ruby
# prepared_statements: true
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (?, ?, ?)
#
# prepared_statements: false
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
Author.where(id: [1, 2, 3]).to_a
```
But now, binds in the IN clause are substituted regardless of whether
`prepared_statements: true` is set or not:
```ruby
# prepared_statements: true
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
# prepared_statements: false
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
Author.where(id: [1, 2, 3]).to_a
```
I suppose that is considered a regression, given this context:
> While I would prefer that we fix/avoid the too-many-parameters
problem, but I don't like the idea of globally ditching bind params for
this edge case... we're getting to the point where I'd almost consider
anything that doesn't use a bind to be a bug.
https://github.com/rails/rails/pull/33844#issuecomment-421000003
This makes bind substitution respect whether `prepared_statements: true`
is set (i.e. restores the original behavior), while still keeping the
optimization for the cases that need substituted binds
(`prepared_statements: false`, `relation.to_sql`). Even when
`prepared_statements: true`, it is still much faster than before thanks
to the optimized (bind-node-less) binds generation.
```ruby
class Post < ActiveRecord::Base
end

ids = (1..1000).map { Post.create!.id }

puts "prepared_statements: #{Post.connection.prepared_statements.inspect}"

Benchmark.ips do |x|
  x.report("where with ids") do
    Post.where(id: ids).to_a
  end
end
```
* Before (200058b0113efc7158432484d71c1a4f1484a4a1)
`prepared_statements: true`:
```
Warming up --------------------------------------
where with ids 6.000 i/100ms
Calculating -------------------------------------
where with ids 63.806 (± 7.8%) i/s - 318.000 in 5.015903s
```
`prepared_statements: false`:
```
Warming up --------------------------------------
where with ids 7.000 i/100ms
Calculating -------------------------------------
where with ids 73.550 (± 8.2%) i/s - 371.000 in 5.085672s
```
* Now with this change
`prepared_statements: true`:
```
Warming up --------------------------------------
where with ids 9.000 i/100ms
Calculating -------------------------------------
where with ids 91.992 (± 7.6%) i/s - 459.000 in 5.020817s
```
`prepared_statements: false`:
```
Warming up --------------------------------------
where with ids 10.000 i/100ms
Calculating -------------------------------------
where with ids 104.335 (± 8.6%) i/s - 520.000 in 5.026425s
```
Follow up of #34122.
Relation method calls rely on `method_missing`, but if `Kernel` has a
method with the same name (e.g. `open`), Kernel's method is invoked,
since `method_missing` never happens.
To prevent that, eagerly generate relation methods when a method has
the same name as one on `Kernel`.
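The underlying Ruby behavior can be sketched without Active Record (the class names here are made up): a real method, even an inherited one, is found by normal dispatch before `method_missing` is ever consulted, so eagerly defining the method is what lets it win:

```ruby
class FakeRelation
  def method_missing(name, *args)
    :from_method_missing
  end

  def respond_to_missing?(name, include_private = false)
    true
  end

  # The fix, in miniature: define the method explicitly so it shadows
  # any same-named inherited method.
  def self.eagerly_define(name)
    define_method(name) { :from_generated_method }
  end
end

relation = FakeRelation.new
relation.some_scope # :from_method_missing
relation.frozen?    # false -- Object#frozen? wins, method_missing never runs

FakeRelation.eagerly_define(:frozen?)
relation.frozen?    # :from_generated_method
```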
Fixes #39195.
We fixed `generate_relation_method` to address kwargs warnings in
#38038, but I missed that generated named scopes also need the same
fix.
The test case is picked from #39196.
Co-authored-by: John Hawthorn <john@hawthorn.email>
Admittedly that result is odd and hard to predict, but we should not
change the public behavior without a deprecation cycle.
I had not intended to break any apps, so I've restored the behavior.
Fixes #39171.
Follow-up to #39147 and #39168.
By adding a new purpose-specific format, we avoid potential pitfalls
from concatenating format strings. We also save a String allocation per
Time attribute per inspect.
The new format also includes a time zone offset for more introspective
inspection.
In the AR test suite require_dependency does not make much sense. Just
call vanilla require/load.
Note that in the test that made explicit use of it, there are no
autoload paths, and no constants have been autoloaded. In reality, the
code ended up calling Kernel#load.
SQLite3 does not recognize paths as file URIs unless the
`SQLite3::Constants::Open::URI` flag is set. Therefore, without this
flag, a path like "file::memory:" is interpreted as a filename, causing
a "file::memory:" file to be created and used as the database. Most
tests in `SQLite3TransactionTest` picked up this flag from
`shared_cache_flags`, but a few did not. Because those tests were
creating a file, the path was changed in #38620 such that it no longer
pointed to an in-memory database.
This commit restores the database path as "file::memory:" and ensures
the URI flag is set whenever `in_memory_db?` is true.
This reverts commit 9817d74f3be72d8e685301bfd0acb6a12b9cdda9, reversing
changes made to d326b029e0d3cd649d80a484ceb5138475d3601d.
Just making this easier to merge our PR in. Otherwise there's tons of
conflicts and our PR is faster.
Some commits added `**` to address kwargs warnings in Ruby 2.7, but
`save` and `save!` don't take positional arguments in the first place,
so maintaining both `*` and `**` is redundant.
6d68bb5f6909d7ce797551a7422c9f
```ruby
steve = Person.find_by(name: "Steve")
david = Author.find_by(name: "David")
relation = Essay.where(writer: steve)
# Before
relation.rewhere(writer: david).to_a # => []
# After
relation.rewhere(writer: david).to_a # => [david]
```
For now `rewhere` only works for real column names; it doesn't work
for aliased attributes, nested conditions, or associations.
To fix that, we need to build the new where clause first, and then get
the attribute names from the new where clause.
Make changing a NOT NULL constraint `reversible`.
When changing a NOT NULL constraint, we use the
`ActiveRecord::ConnectionAdapters::SchemaStatements#change` method,
which is not reversible, so `up` and `down` methods were required. We
can use the `change_column_null` method if only one constraint is
changed, but if we want to change multiple constraints with one ALTER
statement, `up` and `down` methods were still required.
Example failure: https://buildkite.com/rails/rails/builds/68661#84f8790a-fc9e-42ef-a7fb-5bd15a489de8/1002-1012
The failing `destroyed_by_association` tests create an author (a
DestroyByParentAuthor) and a book (a DestroyByParentBook) that belongs
to that author. If the database already contains books that refer to
that author's ID from previous tests (i.e. tests that disabled
`use_transactional_tests`), then one of those books will be loaded and
destroyed instead of the intended DestroyByParentBook book.
By loading the `:books` fixtures, we ensure the database does not
contain such unexpected books.
Co-authored-by: Eugene Kenny <elkenny@gmail.com>
Co-authored-by: Ryuta Kamizono <kamipo@gmail.com>
Since 901d62c586c20ab38b0f18f4bd9a4419902768c4, associations can only be
autosaved once: after a record has been saved, `@new_record_before_save`
will always be false. This assumes that records only transition to being
persisted once, but there are two cases where it happens multiple times:
when the transaction that saved the record is rolled back, and when the
persisted record is later duplicated.
I assumed in #39039 that all aggregation functions would return a
numeric result, but that assumption was incorrect for `minimum` and
`maximum` when the aggregated column has a non-numeric type.
I've restored type casting of the aggregated result for `minimum` and
`maximum`.
Fixes #39110.
The type information for type casting is entirely separated into type
objects, so if anyone is passing a column to `type_cast` in Rails 6,
they are likely doing something wrong. See the comment for more
details:
28d815b894/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb (L33-L42)
This also deprecates passing legacy binds (an array of
`[column, value]` pairs, the 4.2-style format) to query methods on the
connection. That legacy format was kept for backward compatibility;
instead of it, I've added support for a casted-binds format (an array
of casted values), which is easier to construct than either of the
existing binds formats.
I found the internal `without_transaction_enrollment` callbacks, which
have not gained any new use in over five years, while working on
reverting #9068
(https://github.com/rails/rails/pull/36049#issuecomment-487318060).
I think we will never make those callbacks public, since the mechanism
of `without_transaction_enrollment` is too implementation specific; at
least before #9068, records in a transaction were all enrolled into
the transaction.
Those callbacks were introduced in #18936 to make `touch_later`
(#19324) possible, but internal callbacks are overkill just for
`touch_later`, and invoking the extra callbacks also has a little
overhead even when unused.
So I think we can remove the internal callbacks for now, until we come
up with a good use for them.
`column_types` is empty except in the PostgreSQL adapter, and even
there `attribute_types.each_key { |k| column_types.delete k }` leaves
it empty in almost all cases, so that code is quite useless. This
improves performance of `find_by_sql` by avoiding that useless loop as
much as possible.
```ruby
ActiveRecord::Schema.define do
  create_table :active_storage_blobs do |t|
    t.string   :key,          null: false
    t.string   :filename,     null: false
    t.string   :content_type
    t.text     :metadata
    t.string   :service_name, null: false
    t.bigint   :byte_size,    null: false
    t.string   :checksum,     null: false
    t.datetime :created_at,   null: false

    t.index [ :key ], unique: true
  end
end

class ActiveStorageBlob < ActiveRecord::Base
end

Benchmark.ips do |x|
  x.report("find_by") { ActiveStorageBlob.find_by(id: 1) }
end
```
Before:
```
Warming up --------------------------------------
find_by 1.256k i/100ms
Calculating -------------------------------------
find_by 12.595k (± 3.4%) i/s - 64.056k in 5.091599s
```
After:
```
Warming up --------------------------------------
find_by 1.341k i/100ms
Calculating -------------------------------------
find_by 13.170k (± 3.5%) i/s - 67.050k in 5.097439s
```
To avoid the column types loop for the PostgreSQL adapter, this skips
returning additional column types if a column has already been type
casted by pg decoders. Fortunately this partially fixes #36186 for
common types.
Relying on `Arel::Table.engine` is convenient if an app has only a
single kind of database, but if not, that global state does not always
match the current connection.
`allowed_index_name_length` was used for internal temporary operations
in SQLite3, since index names in SQLite3 must be globally unique and
SQLite3 doesn't have a full ALTER TABLE feature (it is emulated by
creating a temporary table with a prefix).
`allowed_index_name_length` existed to reserve a margin for that
prefix, but SQLite3 actually has no limit on identifier name length,
so the margin was removed in 36901e6.
Now `allowed_index_name_length` is no longer relied on by any adapter,
so I'd like to remove this no-longer-used internal method.
This is a smaller alternative performance improvement that avoids
refactoring the type casting mechanism (#39009).
It is a relatively small change (but about 40% faster than before), so
it should be easier to review without a discussion about refactoring
the type casting mechanism.
It simply makes `attribute.in(values)` allocate less, by building one
casted array node instead of an array of casted nodes.
```ruby
ids = (1..1000).map { Post.create!.id }

Benchmark.ips do |x|
  x.report("where with ids") do
    Post.where(id: ids).to_a
  end
  x.report("where with sanitize") do
    Post.where(ActiveRecord::Base.sanitize_sql(["id IN (?)", ids])).to_a
  end
  x.compare!
end
```
Before:
```
Warming up --------------------------------------
where with ids 7.000 i/100ms
where with sanitize 13.000 i/100ms
Calculating -------------------------------------
where with ids 70.661 (± 5.7%) i/s - 357.000 in 5.072771s
where with sanitize 130.993 (± 7.6%) i/s - 663.000 in 5.096085s
Comparison:
where with sanitize: 131.0 i/s
where with ids: 70.7 i/s - 1.85x slower
```
After:
```
Warming up --------------------------------------
where with ids 10.000 i/100ms
where with sanitize 13.000 i/100ms
Calculating -------------------------------------
where with ids 98.174 (± 7.1%) i/s - 490.000 in 5.012851s
where with sanitize 132.289 (± 8.3%) i/s - 663.000 in 5.052728s
Comparison:
where with sanitize: 132.3 i/s
where with ids: 98.2 i/s - 1.35x slower
```
`in_clause_length` was added in c5a284f to address Oracle's IN clause
length limitation.
`in_clause_length` has been entirely integrated into the Arel visitor
since #35838 and #36074.
Since the Oracle visitors are the only code that relies on
`in_clause_length`, I'd like to remove it from the Rails code base,
just as the Oracle visitors were removed (#38946).
Before this, 1000 `Or` nodes would raise "stack level too deep" due to
visiting a too-deeply-nested Arel AST.
This makes the Arel `Or` AST more concise and the `Or` visitor
non-recursive when `Or` nodes are adjoined; as a result, "stack level
too deep" is no longer raised.
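A plain-Ruby sketch of the idea (made-up node names, not Arel's actual classes): adjoining `Or` nodes collapse into one n-ary node, so the visitor can loop over children instead of recursing:

```ruby
# An n-ary OR node: `or` appends instead of nesting, keeping depth flat.
OrNode = Struct.new(:children) do
  def or(other)
    OrNode.new(children + [other])
  end
end

# The visitor walks direct children iteratively -- recursion depth no
# longer grows with the number of OR-ed predicates.
def visit_or(node)
  node.children.join(" OR ")
end

flat = (2..1000).inject(OrNode.new(["id = 1"])) { |node, i| node.or("id = #{i}") }
flat.children.size                             # 1000 predicates, one level deep
visit_or(flat).start_with?("id = 1 OR id = 2") # true, and no SystemStackError
```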
```ruby
class Post < ActiveRecord::Base
end

posts = (0..500).map { |i| Post.where(id: i) }

Benchmark.ips do |x|
  x.report("inject scopes") { posts.inject(&:or).to_sql }
end
```
Before:
```
Warming up --------------------------------------
where with ids 9.000 i/100ms
Calculating -------------------------------------
where with ids 96.126 (± 2.1%) i/s - 486.000 in 5.058960s
```
After:
```
Warming up --------------------------------------
inject scopes 10.000 i/100ms
Calculating -------------------------------------
inject scopes 101.714 (± 2.9%) i/s - 510.000 in 5.018880s
```
Fixes #39032.
If only one Arel node exists, wrapping it in an `And` node is
obviously redundant; making the Arel AST more concise improves
performance of visiting it (about 20% faster for a complex AST).
```ruby
class Post < ActiveRecord::Base
end

posts = (0..500).map { |i| Post.where(id: i) }

Benchmark.ips do |x|
  x.report("inject scopes") { posts.inject { |res, scope| res.or(scope) }.to_sql }
end
```
Before:
```
Warming up --------------------------------------
where with ids 8.000 i/100ms
Calculating -------------------------------------
where with ids 80.416 (± 2.5%) i/s - 408.000 in 5.078118s
```
After:
```
Warming up --------------------------------------
where with ids 9.000 i/100ms
Calculating -------------------------------------
where with ids 96.126 (± 2.1%) i/s - 486.000 in 5.058960s
```
Currently, `count` and `average` always return a numeric value, but
`sum`, `maximum`, and `minimum` do not always return one if the
aggregation is on a custom attribute type.
I think that inconsistent behavior is surprising:
```ruby
# All adapters except postgresql adapter are affected
# by custom type casting.
Book.group(:status).sum(:status)
# => { "proposed" => "proposed", "published" => nil }
```
That is caused by falling back to `type_for(column)` to look up the
cast type. Now all supported adapters can return a numeric value
without that fallback, so I think we can remove it; that will also fix
the aggregate functions to return numeric values consistently.
While not a particularly good idea, it's possible to use `object_id` as
an attribute name, typically by defining a polymorphic association named
`object`. Since 718a32ca745672a977a0d4ae401f61f439767405, transactional
callbacks deduplicate records by their `object_id`, but this causes
incorrect behaviour when the record has an attribute with that name.
Using `__id__` instead makes a naming collision much less likely.
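A standalone sketch of the collision (the class here is hypothetical): once an attribute method shadows `object_id`, it no longer uniquely identifies the record, while `BasicObject#__id__` still does:

```ruby
class Record
  # Simulates an attribute method generated for an association named
  # `object` (hypothetical example; Ruby warns about this redefinition).
  def object_id
    "the object's type attribute"
  end
end

record = Record.new
record.object_id # "the object's type attribute" -- no longer a unique Integer
record.__id__    # still the real, unique object id (an Integer)
```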
This PR allows for passing `if_exists` options to the `remove_index`
method so that we can ignore already removed indexes. This work follows
column `if/if_not_exists` from #38352 and `:if_not_exists` on `add_index`
from #38555.
We've found this useful at GitHub, there are migrations where we don't
want to raise if an index was already removed. This will allow us to
remove a monkey patch on `remove_index`.
I considered raising after the `index_name_for_remove` method is called
but that method will raise if the index doesn't exist before we get to
execute. I have a commit that refactors this but after much
consideration this change is cleaner and more straightforward than other
ways of implementing this.
This change also adds a little extra validation to the `add_index` test.
Fix `nodoc` on edited methods.
This removes ibm_db, informix, mssql, oracle, and oracle12 Arel visitors
which are not used in the code base.
Actually the oracle and oracle12 visitors are used by the
oracle-enhanced adapter, but I now think those visitors should live in
the adapter's repo, as the sqlserver adapter does with its dedicated
Arel visitor
(https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/blob/master/lib/arel/visitors/sqlserver.rb),
otherwise it is hard to find bugs and review PRs for the oracle
visitors (e.g. #35838, #37646), since we don't have enough knowledge
of or environments for Oracle.
Previously, if `build_association` was called multiple times for a `has_one` association but never committed to the database, the first newly-associated record would trigger `touch` during the attempted removal of the record.
For example:
```ruby
class Post < ActiveRecord::Base
  has_one :comment, inverse_of: :post, dependent: :destroy
end

class Comment < ActiveRecord::Base
  belongs_to :post, inverse_of: :comment, touch: true
end

post = Post.create!
comment_1 = post.build_comment
comment_2 = post.build_comment
```
When `comment_2` is initialized, the `has_one` would attempt to destroy `comment_1`, triggering a `touch` on `post` from an association record that hasn't been committed to the database.
This removes the attempt to delete an associated `has_one` unless it’s persisted.
Related #15137.
Firebird-related code was already removed in #15137.
We have two `current_adapter?(:DB2Adapter)` calls in tests, but the
adapter is no longer maintained (its last release was on November 15,
2012).
https://rubygems.org/gems/db2
Yet another (latest) DB2 adapter (`IBM_DBAdapter`) might support Rails
5.0.7, but apparently does not work with Rails 5.2.
https://rubygems.org/gems/ibm_db
We have a few lines mentioning DB2 in the docs, but they are of little
worth to almost all current users now.
#37523 has a regression: extra scoping in callbacks is ignored when
creating on an association relation.
It should respect `klass.current_scope` even when creating on an
association relation, to allow extra scoping in callbacks.
Fixes #38741.
This module was added in 16ae3db5a5c6a08383b974ae6c96faac5b4a3c81 to
allow `ActiveRecord::AttributeMethods::Dirty` to define callbacks and
still have its `_update_record` method wrapped by the version defined in
`ActiveRecord::Callbacks`, so that updates in `before_update` callbacks
are taken into account for partial writes.
The callbacks that created this circular dependency were removed in
34f075fe5666dcf924606f8af2537b83b7b5139f, so we can move the callback
definitions back to the `Callbacks` module.
* Fix EagerLoadPolyAssocsTest setup
* EagerLoadPolyAssocsTest includes a Remembered module in multiple test
ActiveRecord classes. The module is supposed to keep track of records
created for each of the included classes individually, but it collects all
records for every class. This happens because @@remembered is defined on the
Remembered module and shared between the ActiveRecord classes. This only
becomes an issue for databases (like CockroachDB) that use random primary
keys instead of sequential ones by default.
* To fix the bug, we can make the remembered collection name unique per
ActiveRecord class.
* Update EagerLoadPolyAssocsTest test setup
* Instead of defining remembered as a class variable, we can define it as an
instance variable that will be unique to every class that includes the
Remembered module.
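The difference can be sketched in plain Ruby (module and class names are made up): a `@@` variable on the shared module is one collection for all includers, while a class-level instance variable is unique per including class:

```ruby
# Buggy shape: a @@class variable lives on the module, so every
# including class shares one collection.
module RememberedShared
  @@remembered = []

  def self.included(base)
    base.extend(self)
  end

  def remembered
    @@remembered
  end
end

# Fixed shape: a class-level instance variable is unique to each
# including class.
module RememberedPerClass
  def self.included(base)
    base.extend(self)
  end

  def remembered
    @remembered ||= []
  end
end

class SharedA; include RememberedShared; end
class SharedB; include RememberedShared; end
SharedA.remembered << :a
SharedB.remembered << :b
SharedA.remembered # [:a, :b] -- polluted across classes

class PerA; include RememberedPerClass; end
class PerB; include RememberedPerClass; end
PerA.remembered << :a
PerB.remembered << :b
PerA.remembered # [:a] -- isolated
```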
When we try to create a table which already exists and which also adds
indexes, the `if_not_exists` option passed to `create_table` is not
extended to the indexes. So the migration results in an error if the
table and indexes already exist.
This change extends the `if_not_exists` support to `add_index` so that
if the migration tries to create a table which also has existing
indexes, error won't be raised.
Also as a side-effect individual `add_index` calls will also accept
`if_not_exists` option and respect it henceforth.
[Prathamesh Sonpatki, Justin George]
This reverts commit f265e0ddb1139a91635b7905aae1be76b22c6db1, reversing
changes made to 08dfa9212df4a6bf332a4c49b7e8a7d876a69331.
Reverted due to surprising behavior for applications. We need to
deprecate this behavior first instead of raising by default.
fix insert_all enum test
fix rubocop, change test to double quotes
Update activerecord/test/cases/insert_all_test.rb
fix insert_all_enum_values test: double quotes and order relation before pluck
Co-Authored-By: Ryuta Kamizono <kamipo@gmail.com>
change insert_all_enum_values test to not skip duplicates so it works across adapters
If a transaction is wrapped in a `Timeout.timeout(duration)` block,
the transaction will be committed when the block is exited by the
timeout, since `Timeout` uses `throw`. Ruby code doesn't have a way to
distinguish a block being exited via `return`, `break`, or `throw`, so
fixing this problem for the `throw` case would require a
backwards-incompatible change for blocks exited with `return` or
`break`. As such, this deprecates the current behaviour so it can be
changed in the future.
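The mechanics can be demonstrated without a database (the `txn` wrapper below is a stand-in, not Active Record's implementation): `throw` skips `rescue` clauses but still runs `ensure`, so a commit-in-ensure wrapper treats a thrown exit like a successful one:

```ruby
state = nil

# Stand-in transaction wrapper: roll back on a raised exception,
# otherwise commit in the ensure block -- the same general shape as a
# real transaction block.
txn = lambda do |&block|
  committed = true
  begin
    block.call
  rescue Exception
    committed = false
    raise
  ensure
    state = committed ? :committed : :rolled_back
  end
end

# `throw` unwinds to the matching `catch`, skipping `rescue` clauses
# but running `ensure` -- the wrapper sees a "successful" exit.
catch(:timeout) do
  txn.call { throw :timeout }
end
after_throw = state # :committed -- committed despite the early exit

begin
  txn.call { raise "boom" }
rescue RuntimeError
end
after_raise = state # :rolled_back -- a real exception is detected
```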
This behaviour is in
rails/activerecord/lib/active_record/enum.rb, `#serialize(value)`,
line 143: if the value is not present in the mapping, we send the
value back as-is, which in MySQL can match an unrelated record.
I have changed it to return nil if the value is not present in the
mapping.
Implemented code review changes
Improved test case coverage
[ci skip] - cosmetic changes for better readability of the change log
Signed-off-by: ak <atulkanswal@gmail.com>
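The fix described above can be sketched with a plain hash standing in for the enum mapping (this is not the actual Active Record code):

```ruby
mapping = { "proposed" => 0, "published" => 1 }

# Before: an unknown value passed through unchanged, so MySQL could
# coerce the stray string and match an unrelated record.
serialize_before = ->(value) { mapping.fetch(value.to_s, value) }

# After: an unknown value serializes to nil, matching no record.
serialize_after = ->(value) { mapping.fetch(value.to_s, nil) }

serialize_before.call("bogus")   # "bogus" -- leaks into the query
serialize_after.call("bogus")    # nil
serialize_after.call(:published) # 1
```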
When `colorize_logging` is disabled, logs do not colorize the SQL
queries, but the `sql_color` method is always invoked anyway, and its
regex matching slows down query logging.
This PR fixes #38685 by removing the unnecessary invocation of the
`sql_color` method when `colorize_logging` is disabled.
This method was jumping through extra hoops to find the name of the
class the connection is stored on, when we can get it from the
connection itself. Since we already have the connection, we don't need
to loop through the pools.
pools.
In addition, this was using the wrong class name. The class name for the
schema migration should come from the connection owner class, not from
the `db_config.name`. In this case, `db_config.name` is the name of the
configuration in the database.yml. Rails uses the class name to lookup
connections, not the db config name, so we should be consistent here.
While working on this I noticed that we were generating an extra schema
migration class for `ActiveRecord::Base`. Since `ActiveRecord::Base` can
and should use the default and we don't want to create a new one for
single db applications, we should skip creating this if the spec name is
`ActiveRecord::Base`. I added an additional test that ensures the class
generation is correct.
We don't actually need this since the only reason it exists is to pass
the owning class name down to the `handler`. This removes a level of
indirection and an unnecessary accessor on db_config. db_config
shouldn't have to know what class owns it, so we can just remove this
and pass it to the handler.
The Symbol case is needed to preserve current behavior. This doesn't
need a changelog because it's changing un-released behavior.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Instead of doing a case statement here we can have each of the objects
respond to `invert`. This means that when adding new objects we don't
need to increase this case statement, it's more object oriented, and
let's be fair, it looks better too.
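A miniature version of the idea (the node names are illustrative, not Arel's full set): each node answers `invert` itself, so no central case statement has to enumerate node types:

```ruby
# Each node type knows its own inverse; adding a new node type means
# adding an #invert method, not growing a central case statement.
Equality = Struct.new(:left, :right) do
  def invert
    NotEqual.new(left, right)
  end
end

NotEqual = Struct.new(:left, :right) do
  def invert
    Equality.new(left, right)
  end
end

node = Equality.new(:id, 1)
node.invert        # NotEqual node
node.invert.invert # back to the original Equality

# An arbitrary object without #invert raises NoMethodError instead of
# being silently handled by a catch-all branch.
Object.new.respond_to?(:invert) # false
```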
Aaron and I stumbled upon this while working on some performance
work in Arel.
I removed `random_object` from the invert test because we don't support
random objects. If you pass a random object to Arel, it should raise,
not be inverted.
Co-authored-by: Aaron Patterson <aaron.patterson@gmail.com>
After running `bundle exec rake test:sqlite3` and `bundle exec rake test:sqlite3_mem`
on my VM I noticed that it had created untracked files:
```bash
vagrant@ubuntu-bionic:/rails/activerecord$ git status
Untracked files:
(use "git add <file>..." to include in what will be committed)
db/
file::memory:
```
To prevent them from being accidentally committed, I moved
'file::memory:' into the `activerecord/db/` folder and added the
folder to .gitignore.
Also, we could consider fixing this by removing the `db/` folder in
each test that creates it.
It would be great if someone confirms that it happens not only on my VM.
The current code expects an `eq` node to have at least one Arel
attribute, but an `eq` node may have none (e.g.
`Arel.sql("...").eq(...)`). In that case `unscope` will raise
`NoMethodError`:
```
% bundle exec ruby -w -Itest test/cases/relations_test.rb -n test_unscope_with_arel_sql
Using sqlite3
Run options: -n test_unscope_with_arel_sql --seed 4477
# Running:
E
Error:
RelationTest#test_unscope_with_arel_sql:
NoMethodError: undefined method `name' for #<Arel::Nodes::Quoted:0x00007f9938e55960>
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:157:in `block (2 levels) in except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/arel.rb:57:in `fetch_attribute'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:157:in `block in except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:156:in `reject'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:156:in `except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:31:in `except'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:487:in `block (2 levels) in unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:481:in `each'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:481:in `block in unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:471:in `each'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:471:in `unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:464:in `unscope'
test/cases/relations_test.rb:2062:in `test_unscope_with_arel_sql'
```
We should check whether both `value.left` and `value.right` are Arel
attributes.
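In sketch form (simplified stand-ins for the Arel nodes): the predicate filter should yield whichever side of the equality is an attribute, rather than assuming the left side always is one:

```ruby
Attribute = Struct.new(:name)
Equality  = Struct.new(:left, :right)

# Yield each side that is an attribute; an equality built from
# Arel.sql(...) may have an attribute on the right, or none at all.
def each_attribute(node)
  return unless node.is_a?(Equality)
  yield node.left  if node.left.is_a?(Attribute)
  yield node.right if node.right.is_a?(Attribute)
end

found = []
each_attribute(Equality.new(Attribute.new(:id), 1)) { |a| found << a.name }
each_attribute(Equality.new("lower(name)", Attribute.new(:name))) { |a| found << a.name }
each_attribute(Equality.new("1", "1")) { |a| found << a.name } # yields nothing
found # [:id, :name] -- no NoMethodError on attribute-less nodes
```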
Behavior has not changed here but the previous API could be
misleading to people who thought it would switch connections for only
that class. `connected_to` switches the context from which we are
getting connections, not the connections themselves.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This adds gzip support for both the YAML and the Marshal serialization
strategies.
Particularly large schema caches can become a problem when deploying
to Kubernetes, as there is currently a 1 MiB (1 * 1024 * 1024 bytes)
limit for a ConfigMap. For large databases, the schema cache can
exceed this limit.
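A minimal sketch of the approach using Ruby's stdlib (the payload here is made up, and this is not the Active Record implementation):

```ruby
require "yaml"
require "zlib"

# A stand-in schema-cache payload.
schema_cache = {
  "posts" => { "id" => "bigint", "title" => "string" }
}

yaml    = YAML.dump(schema_cache)
gzipped = Zlib.gzip(yaml)

# The gzipped bytes are what would be written to disk (e.g. a file
# mounted from a ConfigMap); loading reverses the two steps.
restored = YAML.load(Zlib.gunzip(gzipped))
restored == schema_cache # true
```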
Applications can now connect to multiple shards and switch between
their shards in an application. Note that the shard swapping is
still a manual process as this change does not include an API for
automatic shard swapping.
Usage:
Given the following configuration:
```yaml
production:
  primary:
    database: my_database
  primary_shard_one:
    database: my_database_shard_one
```
Connect to multiple shards:
```ruby
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to shards: {
    default: { writing: :primary },
    shard_one: { writing: :primary_shard_one }
  }
end
```
Swap between shards in your controller / model code:
```ruby
ActiveRecord::Base.connected_to(shard: :shard_one) do
  # Read from shard one
end
```
The horizontal sharding API also supports read replicas. See
guides for more details.
This PR also moves some no-doc'd methods into the private namespace as
they were unnecessarily public. We've updated some error messages and
documentation.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
I have so. many. regrets. about using `spec_name` for database
configurations and now I'm finally putting this mistake to an end.
Back when I started multi-db work I assumed that eventually
`connection_specification_name` (sometimes called `spec_name`) and
`spec_name` for configurations would one day be the same thing. After
2 years I no longer believe they will ever be the same thing.
This PR deprecates `spec_name` on database configurations in favor of
`name`. It's the same behavior, just a better name, or at least a
less confusing name.
`connection_specification_name` refers to the parent class name (i.e.
ActiveRecord::Base, AnimalsBase, etc.) that holds the connection for
its models. In some places like ConnectionHandler it's shortened to
`spec_name`, hence the major confusion.
Recently I've been working with some new folks on database stuff and
connection management, and I realized how confusing it is to explain
that `db_config.spec_name` is neither `spec_name` nor
`connection_specification_name`. Worse, one is a symbol while the
other is a class name. This was made even more complicated by the fact
that `ActiveRecord::Base` used `primary` as the
`connection_specification_name` until #38190.
After spending 2 years with connection management, I don't believe we
can ever use the symbols from the database configs as a way to connect
to the database without the class name being _somewhere_, because a
db_config does not know what its owner class is until it's been
connected, and a model has no idea what db_config belongs to it until
it's connected. The model is the only way to tie a primary/writer
config to a replica/reader config. This could change in the future,
but I don't see value in adding a class name to the db_configs before
connection, or in telling a model what config belongs to it before
connection. That would probably break a lot of application
assumptions. If we do ever end up in that world, we can use name,
because tbh `spec_name` and `connection_specification_name` were
always confusing to me.
Followup to 88fe76e69328d38942130e16fb65f4aa1b5d1a6b.
These are new in RuboCop 0.80.0, and enforce a style we already prefer
for performance reasons (see df81f2e5f5df46c9c1db27530bbd301b6e23c4a7).
Add `#strict_loading` to any record to prevent lazy loading of associations.
`strict_loading` will cascade down from the parent record to all the
associations to help you catch any places where you may want to use
`preload` instead of lazy loading. This is useful for preventing N+1's.
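A stand-in sketch of the behavior in plain Ruby (the `Developer` class and `audit_logs` association here are hypothetical, not the Rails implementation): a record flagged for strict loading raises instead of lazily fetching an association.

```ruby
class StrictLoadingViolationError < StandardError; end

class Developer
  def initialize(strict_loading: false)
    @strict_loading = strict_loading
  end

  def strict_loading?
    @strict_loading
  end

  def audit_logs
    # A strict-loading record refuses to lazily hit the database.
    raise StrictLoadingViolationError, "audit_logs must be preloaded" if strict_loading?
    @audit_logs ||= [] # pretend this lazily queries the database
  end
end

begin
  Developer.new(strict_loading: true).audit_logs
rescue StrictLoadingViolationError => e
  e.message # the violation surfaces where lazy loading was attempted
end
```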
Co-authored-by: Aaron Patterson <aaron.patterson@gmail.com>
Before df186bd16f0d4a798e626297277fc6b490c1419e, `assign_attributes` and
`attributes=` were both defined in Active Model and both made a copy of
their argument. Now `assign_attributes` is overridden in Active Record
and the copy happens there instead, but `attributes=` isn't overridden.
This meant that assigning nested or multi-parameters via `attributes=`
would mutate the argument, which the copying was meant to prevent.
This issue is caused by association queries using a newly created,
one-time predicate builder, which doesn't know about registered
predicate handlers. To fix the issue, dup the predicate builder for the
klass instead of creating a new one.
Fixes #38239.
Rails has a number of places where a YAML configuration file is read,
then ERB is evaluated and finally the YAML is parsed.
This consolidates that into one common class.
Co-authored-by: Kasper Timm Hansen <kaspth@gmail.com>
- When a user passes `updated_at` to `upsert_all`, the given value is used.
- When a user omits `updated_at`, `upsert_all` touches the timestamp if (but only if) any upserted values differ.
Preserve Rails' ability to generate intelligent cache keys for ActiveRecord when using `upsert_all` frequently to sync imported data.
indexes in a table.
Currently the pg_class catalog is filtered by relkind to retrieve the
indexes in a table. In versions of PostgreSQL lower than 11 that value
is always `i` (lower case), but since version 11 PostgreSQL supports
partitioned indexes, referenced with a relkind value of `I` (upper
case). This makes any feature in the current code base exclude those
partitioned indexes.
The solution proposed is to use an `IN` clause to match relkind
values of both `i` and `I` when retrieving a table's indexes.
This PR adds support for `if_exists` on `remove_column` and
`if_not_exists` on `add_column` to support silently ignoring migrations
if the remove tries to remove a non-existent column or an add tries to
add an already existing column.
We (GitHub) have custom monkey-patched support for these features and
would like to upstream this behavior.
This matches the behavior already supported for `create_table` and
`drop_table`. The behavior for sqlite differs from mysql/postgres for
remove column, and that is reflected in the tests.
Fixes #38332
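A minimal plain-Ruby sketch of the guard (the column lists and method bodies here stand in for the real schema statements, not the adapter code): `if_exists` / `if_not_exists` turn the "already there / not there" error into a silent no-op.

```ruby
def remove_column(columns, name, if_exists: false)
  unless columns.include?(name)
    return columns if if_exists # silently ignore the missing column
    raise ArgumentError, "column #{name} does not exist"
  end
  columns - [name]
end

def add_column(columns, name, if_not_exists: false)
  if columns.include?(name)
    return columns if if_not_exists # silently ignore the duplicate column
    raise ArgumentError, "column #{name} already exists"
  end
  columns + [name]
end

remove_column([:title], :body, if_exists: true) # => [:title], no error raised
```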
If the `connected_to` block returns a relation and does not inspect or
load that relation before returning it, the block will exit before the
database is queried. This causes the wrong database connection to be
queried.
The consequences of this are getting records from the primary instead of
the replica, and potentially having database performance impact.
Relations lazily query the database. If you return the relation from the
block like:
```
posts = ActiveRecord::Base.connected_to(role: :reading) { Post.where(id: 1) }
```
`posts.first` will be queried from the `writing` connection instead
because it's lazy and performed outside the block. Any query that loads
the relation (ie `to_a`) inside the block would eagerly load the
relation's records and not exhibit this bug.
`connected_to` now checks if the return value is a `Relation` and if so
calls `load`.
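The fix can be modeled in plain Ruby (`FakeRelation` and the thread-local role bookkeeping are stand-ins, not Rails internals): forcing `load` inside the block pins the query to the block's connection.

```ruby
class FakeRelation
  attr_reader :queried_on

  # Pretend the first load runs the query against whatever
  # connection role is current at that moment.
  def load
    @queried_on ||= Thread.current[:role]
    self
  end
end

def connected_to(role:)
  Thread.current[:role] = role
  result = yield
  result.load if result.is_a?(FakeRelation) # the fix: load eagerly
  result
ensure
  Thread.current[:role] = :writing
end

relation = connected_to(role: :reading) { FakeRelation.new }
relation.queried_on # :reading; without the `load` call it would be :writing
```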
Co-authored-by: Aaron Patterson <aaron.patterson@gmail.com>
`default_scoped` is the only way to enforce returning a scope with
default scopes in a scoping, and it is needed for migration to avoid
leaking scope (#35280, #37727).
Closes #38241.
This PR moves the advisory lock to its own connection instead of
`ActiveRecord::Base`'s to fix #37748. As a note, the issue is present on
both mysql and postgres. We don't see it on sqlite3 because sqlite3
doesn't support advisory locks.
The underlying problem only appears if:
1) the app is using multiple databases, and therefore establishing a new
connection in the abstract models
2) the app has a migration that loads a model (ex `Post.update_all`)
which causes that new connection to get established.
This is because when Rails runs migrations the default connections are
established, the lock is taken out on the `ActiveRecord::Base`
connection. When the migration that calls a model is loaded, a new
connection will be established and the lock will automatically be
released.
When Rails goes to release the lock in the ensure block it will find
that the connection has been closed. Even if the connection wasn't
closed the lock would no longer exist on that connection.
We originally considered checking if the connection was active, but
ultimately that would hide that the advisory locks weren't working
correctly because there'd be no lock to release.
We also considered making the lock more granular - that it only blocked
on each migration individually instead of all the migrations for that
connection. This might be the right move going forward, but right now
multi-db migrations that load models are very broken in Rails 6.0 and
master.
John and I don't love this fix, it requires a bit too much knowledge of
internals and how Rails picks up connections. However, it does fix the
issue, makes the lock more global, and makes the lock more resilient to
changing connections.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This updates the database tasks for dumping the Active Record schema cache as
well as clearing the schema cache file, allowing the path to be defined in the
database configuration YAML file.
As before, the value can also be defined in an ENV variable, though this would
not work for a multi-db application. If the value is specified neither in the
DB config, nor in the ENV, then the path will continue to be derived from the
DB config spec_name.
Note that in order to make this change cleaner I also moved a bit of logic
out of a rake task and into the DatabaseTasks class, for symmetry.
We have two rake tasks for the schema cache:
$ rake db:schema:cache:dump
$ rake db:schema:cache:clear
The cache:dump task was implemented in DatabaseTasks, but the
cache:clear one was not.
I also added some tests for the behavior that I was changing, since some of
the code paths weren't tested.
Calling `#remove_connection` on the handler is deprecated in favor of
`#remove_connection_pool`. This change was made to support changing the
return value from a hash to a `DatabaseConfig` object.
`#remove_connection` will be removed in 6.2.
NOTE: `#remove_connection` on `ActiveRecord::Base` will also now return
a `DatabaseConfig` object. We didn't use a deprecation here since
it's not documented that this method used to return a `Hash`.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Database configurations are now objects almost everywhere, so we don't
need to fake access to a hash with `#default_hash` or its alias `#[]`.
Applications should use `configs_for` and pass `env_name` and
`spec_name` to get the database config object. If you're looking for the
default for the test environment you can pass `configs_for(env_name:
"test", spec_name: "primary")`. Change test to development to get the
dev config, etc.
`#default_hash` and `#[]` will be removed in 6.2.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Currently the database selector middleware only passes the request to
the context.
This is enough for the resolver to decide whether to switch
to the primary, but for a custom resolver and a custom context class,
it's not enough to persist any information for subsequent requests. For
example, say you want to use a cookie to decide when to switch.
It's not possible to set the response cookie from this middleware, since
neither the context nor the resolver have access to it.
This includes an extra step to update the context after the response has
been computed.
- When a column has a precision that is higher than what the system
allows, it would result in an error:
```ruby
require "bigdecimal/util"
123.4.to_d(20) # => ArgumentError, precision is too high
```
To fix that problem we need to check the max number of digits
a Float is allowed to have, which we can get with `BigDecimal.double_fig`.
Fixes #38209
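A sketch of the clamping idea, assuming a hypothetical `safe_to_d` helper (the real change lives inside Active Record's decimal handling, not in a helper like this):

```ruby
require "bigdecimal/util"

# Clamp the requested precision to the number of digits a Float
# can actually represent before converting.
def safe_to_d(float, precision)
  float.to_d([precision, BigDecimal.double_fig].min)
end

safe_to_d(123.4, 20) # precision is clamped, so this no longer errors
```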
- ### Problem
It's no longer possible to define a numericality validation on an
abstract class:
```ruby
class AnimalsBase < ApplicationRecord
self.abstract_class = true
validates :age, numericality: { min: 18 }
end
class Dog < AnimalsBase
end
Dog.create!(age: 0) # => ActiveRecord::TableNotSpecified: Dog has no table configured. Set one with Dog.table_name=
```
### Solution
Instead of trying to get the type of the attribute on the class
defining the validation, get it from the record being validated.
Since test fixtures share connections (due to transactional tests) we
end up overwriting the reading configuration so Rails doesn't recognize
it as a replica connection.
This change ensures that if we're using the `reading` role that
connections will always have prevent writes turned on.
If you need a replica connection that does not block writes, you should
use a role name other than `:reading`.
The db selector test and connection handlers test have been updated to
test for these changes. In the db selector test we don't always have a
writing handler so I updated test fixtures to return if that's nil.
Lastly one test needed to be updated to use a different handler name due
to it needing to write to successfully test what it needs to test.
Fixes #37765
This test wasn't correct. If we're calling `resolver.read` and want to
actually read from the replicas then the role would be reading not
writing. This was because the session store needed to be changed so that
we actually are "reading from the replicas" instead of reading from the
primary.
As multiple databases have evolved it's becoming more and more
confusing that we have a `connection_specification_name` that defaults
to "primary" and a `spec_name` on the database objects that defaults to
"primary" (my bad).
Even more confusing is that we use the class name for all
non-ActiveRecord::Base abstract classes that establish connections. For
example connections established on `class MyOtherDatabaseModel <
ApplicationRecord` will use `"MyOtherDatabaseModel"` as its connection
specification name while `ActiveRecord::Base` uses `"primary"`.
This PR deprecates the use of the name `"primary"` as the
`connection_specification_name` for `ActiveRecord::Base` in favor of
using `"ActiveRecord::Base"`.
In this PR the following is true:
* If `handler.establish_connection(:primary)` is called, `"primary"`
will not throw a deprecation warning and can still be used for the
`connection_specification_name`. This also fixes a bug where using this
method to establish a connection could accidentally overwrite the actual
`ActiveRecord::Base` connection IF that connection was not using a
configuration named `:primary`.
* Calling `handler.retrieve_connection "primary"` when
`handler.establish_connection :primary` has never been called will
return the connection for `ActiveRecord::Base` and throw a deprecation
warning.
* Calling `handler.remove_connection "primary"` when
`handler.establish_connection :primary` has never been called will
remove the connection for `ActiveRecord::Base` and throw a deprecation
warning.
See #38179 for details on more motivations for this change.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This commit is somewhat of a bandaid fix for a bug that was revealed in #38029
and then #38151. #38151 can cause problems in certain cases when an app has
a 3-tier config, with replicas, because it reorders the configuration
and changes the implicit default connection that gets picked up.
If an app calls `establish_connection` with no arguments or doesn't call
`connects_to` in `ApplicationRecord` AND uses parallel testing
databases, the application may pick up the wrong configuration.
This is because when the code in #38151 loops through the configurations
it will update the non-replica configurations and then put them at the
end of the list. If you don't specify which connection you want, Rails
will pick up the _first_ connection for that environment. So given the
following configuration:
```
test:
primary:
database: my_db
replica:
database: my_db
replica: true
```
The database configurations will get reordered to be `replica`,
`primary` and when Rails calls `establish_connection` with no arguments
it will pick up `replica` because it's first in the list.
Looking at this bug shows that calling `establish_connection` with no
arguments (which will pick up the default env + first configuration in
the list) or with an environment like `:test` (which will pick up that
env's first configuration) can have surprising behavior in a couple of
cases:
1) In the parallel testing changes we saw users getting the wrong db
configuration and hitting an `ActiveRecord::ReadOnlyError`
2) Writing a configuration that puts `replica` before `primary`, also
resulting in an `ActiveRecord::ReadOnlyError`
The real fix for this issue is to deprecate calling
`establish_connection` with an env or nothing and require an explicit
configuration (like `primary`). This would also involve blessing
`:primary` as the default connection Rails looks for on boot. In
addition, this would require us deprecating connection specification
name "primary" in favor of the class name always since that will get
mega-confusing (seriously, it's already mega-confusing).
We'll work on fixing these underlying issues, but wanted to get a fix
out that restores previous behavior.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
The Preloader relies on other objects to bind the retrieved records to their
parents. When executed across a hash, it assumes that the results of
`preloaded_records` is the appropriate set of records to pass in to the next
layer.
Filtering based on the reflection properties in `preloaded_records` allows us to
avoid excessive preloading in the instance where we are loading across a
`has_one` association distinguished by an order (e.g. "last comment" or
similar), by dropping these records before they are returned to the
Preloader. In this situation, we avoid potentially very long key lists in
generated queries and the consequential AR object instantiations.
This is mostly relevant if the underlying linked set has relatively many
records, because this is effectively a multiplier on the number of records
returned on the far side of the preload. Unfortunately, avoiding the
over-retrieval of the `has_one` association seems to require substantial changes
to the preloader design, and probably adaptor-specific logic -- it is a
top-by-group problem.
If we define a callback before an association that updates the object, then this flag may end up being
manipulated to `false` when it should be `true`. We guard against this by only defining it once.
The implication of it being false, in this case, is that the children aren't updated with the parent_id
and so they fail to associate with one another.
See https://github.com/rails/rails/issues/38120 for more details
The ArgumentError occurs even though the structures are compatible,
because some query methods keep duplicate values.
For example, the behavior of the `joins` method is as follows:
```ruby
relation = Post.joins(:author).joins(:author)
relation.joins_values
#=> [:author, :author]
relation.or(Post.joins(:author))
#=> ArgumentError: Relation passed to #or must be structurally compatible. Incompatible values: [:joins]
```
This commit changes these methods to not keep duplicate values.
Fixes #38052
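The fix in miniature (plain Ruby lists, not the Relation internals): deduplicating the multi-value method values makes the two relations compare as structurally compatible.

```ruby
lhs_joins = [:author, :author] # Post.joins(:author).joins(:author)
rhs_joins = [:author]          # Post.joins(:author)

lhs_joins == rhs_joins           # => false: raised ArgumentError before the fix
lhs_joins.uniq == rhs_joins.uniq # => true: structurally compatible after
```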
In 154abca we switched from using `Rails.env` to fetch the `env_name` to
`ActiveRecord::ConnectionHandling::DEFAULT_ENV.call.to_sym` which
changed the type from a `String` to a `Symbol`.
This commit brings things back to the original state, so we can find the
configurations correctly!
It also modifies the configuration in the configurations array, so that
future connections can find the database with the updated keyword value.
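The String vs Symbol mismatch in miniature (an illustrative hash, not the real configurations object): the lookup keys are strings, so an env name returned as a Symbol finds nothing.

```ruby
configurations = { "default_env" => { "database" => "my_db" } }

configurations[:default_env]      # => nil: the Symbol key misses
configurations[:default_env.to_s] # => { "database" => "my_db" }
```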
Add ActiveRecord::Relation#cache_key_with_version. This method will be
used by ActionController::ConditionalGet to ensure that when collection
cache versioning is enabled, requests using ConditionalGet don't return
the same ETag header after a collection is modified.
Prior to the introduction of collection cache versioning in
4f2ac80d4cdb01c4d3c1765637bed76cc91c1e35, all collection cache keys
included a version. However, with cache versioning enabled, collection
cache keys remain constant. In turn, ETag headers remain constant,
rendering them ineffective.
This commit takes the cache_key_with_version method used for individual
Active Record objects (from aa8749eb52d7919a438940c9218cad98d892f9ad),
and adds it to collections.
When using PostgreSQL, it's useful to be able to specify NULLS FIRST and NULLS LAST on ordered columns. With this commit you can do that now, as in:
```ruby
User.arel_table[:first_name].desc.nulls_last
```
The `_trigger_update_callback` and `_trigger_destroy_callback`
attributes were added in 9252da96597fbffe2246704556524c4804239552 to
avoid running transactional callbacks when an attempt to modify a record
fails inside a transaction due to the record being invalid, for example.
However the values weren't being reset between transactions, which meant
they leaked from one transaction to another and caused false positives
where unsuccessful modifications still triggered callbacks. Clearing
them when a transaction commits or is rolled back fixes the problem.
With collection_cache_versioning enabled, a collection's volatile info
(size & max updated_at timestamp) is included in
ActiveRecord::Relation#cache_version, not #cache_key.
Avoid the SQL query used to determine this volatile info when generating
an un-versioned cache key. This query does not need to be executed
unless cache_version is called separately.
If `AR::Enum` is used with a boolean field, it does not behave the way
we expect.
Fixes #38075
Problem:
When using booleans for an enum, we can set the attribute with the
string (hash key), but we cannot set it with the actual value (hash value).
```ruby
class Post < ActiveRecord::Base
enum status: { enabled: true, disabled: false }
end
post.status = 'enabled'
post.status # 'enabled'
post.status = true
post.status # 'enabled'
post.status = 'disabled'
post.status # 'disabled'
post.status = false
post.status # nil (This is not expected behavior)
```
After looking into `AR::Enum::EnumType#cast`, I found that the `blank?`
method converts a false value to nil (it seems this may not be intentional behavior).
With this patch, an enum defined with boolean values behaves reasonably.
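The shape of the fix can be sketched in plain Ruby (`MAPPING` and this `cast` are illustrative, not the actual `EnumType#cast`): look the value up by label first, then by stored value, instead of discarding anything blank.

```ruby
MAPPING = { "enabled" => true, "disabled" => false }

def cast(value)
  if MAPPING.key?(value)
    value # already a label, e.g. "disabled"
  elsif MAPPING.value?(value)
    MAPPING.key(value) # an actual column value, including false
  end
end

cast(false) # => "disabled" rather than nil
cast(true)  # => "enabled"
```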
The regular expression did not match CREATE TABLE statements printed out by AWS Aurora MySQL 5.6 instances, because they lack the required space at that position.
This adds `:declare`, `:fetch`, `:move`, and `:close` to allowed queries
when `while_preventing_writes` is set. I didn't support `:open` because
AFAICT `:declare` implicitly opens a cursor and all of my attempts to
write `@connection.execute("OPEN cur_ex")` threw a syntax error on
`OPEN`. It seems like open isn't supported with the client.
Fixes: #37960
We want to introduce an object-based DSL for building and modifying
configuration objects. As part of that we want to make sure that users
don't think they can modify configuration_hash values and have them
change the configuration. For that reason we're going to freeze the
Hash here, and have modified places in tests where we were modifying
these hashes.
The commit here also adds a test for the Test Databases and in that work
we found that we were calling `Rails.env` and Active Record doesn't load
Rails.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
These tests weren't calling assert, so if the execute didn't raise but
also didn't return anything it would be a broken test that never fails.
We need to always add an assertion so we know what the expected behavior
is.
Related #36456.
I grepped the code base by `git grep -n 'connection_id: '` then I found
extra `connection_id: object_id` which is added at #20818 but unused.
Actually, the `connection_id: object_id` is not a connection's object_id
but a connection handler's object_id, which is very confusing.
Since the `:connection_id` in an internal instrument is not used, we can
just remove the incorrect information.
`name` is used by Rails to find the configuration by connection
specification name, but database adapters don't need to use `name` in
order to establish a connection. This is part of our work to separate
what the database needs to connect (the configuration hash) and
what Rails needs to find connections (everything else).
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
- I'm hoping to get this change accepted even though this flag was
introduced years ago in 6576f7354e50afb79881aaf3a6f50f4e81dfab70
My use case is the following:
We were never enforcing belongs_to associations and we have a lot
of models that implicitly declare an association as optional.
We are now changing all our models to make associations required
by default.
Since we have a lot of models (more than 1000), I'd like to
progressively enable them to use the `belongs_to_required_by_default`
flag.
The problem is that this flag is a mattr_accessor and can't be
set per model. We basically need to modify all our models (which
could take years) before being able to modify the global flag.
I'd like to change this flag to a class_attribute to solve the
issue.
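Why a class-level attribute helps, sketched in plain Ruby (the real change uses ActiveSupport's `class_attribute`; the accessor below just imitates its inheritance behavior): each model class can override the default without flipping the global flag.

```ruby
class ApplicationRecord
  @belongs_to_required_by_default = false # global default stays off

  class << self
    attr_writer :belongs_to_required_by_default

    # Fall back to the superclass's setting when a class has not
    # set its own value, like class_attribute does.
    def belongs_to_required_by_default
      if instance_variable_defined?(:@belongs_to_required_by_default)
        @belongs_to_required_by_default
      else
        superclass.belongs_to_required_by_default
      end
    end
  end
end

class LegacyModel < ApplicationRecord; end

class NewModel < ApplicationRecord
  self.belongs_to_required_by_default = true # opt in, one model at a time
end
```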
This method in test has a confusing name that also is the same as a
method with a different behavior on `ActiveRecord::Base` so we're just
renaming it to avoid confusion between the two!
Co-authored-by: eileencodes <eileencodes@gmail.com>
The `connection_config` method returns a `Hash`, but since we're moving
toward a place where we're using `DatabaseConfiguration::DatabaseConfig`
objects everywhere, we're introducing a new method here to replace it
called `connection_db_config`.
Co-authored-by: eileencodes <eileencodes@gmail.com>
- #37798 had to be reverted because a new flakyness appeared ([see
CI build](https://buildkite.com/rails/rails/builds/65467#efaa1dd5-aaf4-43a1-a204-d1c42abf614d))
This failure isn't related to the change itself.
If you apply this diff, the test will fail from time to time,
even without my changes.
```diff
diff --git a/activerecord/test/cases/transactions_test.rb b/activerecord/test/cases/transactions_test.rb
index b5c1cac3d9..09fe0ddd7f 100644
--- a/activerecord/test/cases/transactions_test.rb
+++ b/activerecord/test/cases/transactions_test.rb
@@ -1127,7 +1127,6 @@ def test_no_automatic_savepoint_for_inner_transaction
end
end if Topic.connection.supports_savepoints?
-if ActiveRecord::Base.connection.supports_transaction_isolation?
class ConcurrentTransactionTest < TransactionTest
# This will cause transactions to overlap and fail unless they are performed on
# separate database connections.
@@ -1198,4 +1197,3 @@ def test_transaction_isolation__read_committed
assert_equal original_salary, Developer.find(1).salary
end
end
-end
```
SQLite3 isn't safe to run in a multi-threaded environment unless
SQLite is compiled with a special flag, which isn't our case.
Ref https://www.sqlite.org/threadsafe.html
This reverts commit b4ab1f19d8bcc4639c7379b560f000f01753b5b0, reversing
changes made to 9926fd86626690746c7ee78174c4d3577bdfa58d.
This is causing CI to fail about 50% of the time, and can be reproduced
locally 100% of the time on my machine.
Ex failing build https://buildkite.com/rails/rails/builds/65467#efaa1dd5-aaf4-43a1-a204-d1c42abf614d
Error:
```
SQLite3::BusyException: database is locked (ActiveRecord::StatementInvalid)
```
A querystring value should be allowed to include an equal sign `=`.
This is necessary to support passing `options` for a PostgreSQL connection.
```
development:
url: postgresql://localhost/railsdevapp_development?options=-cmysetting.debug=on
```
Before this PR, attempting to start the rails process with that configuration would result in an error:
```
> bundle exec rails console
Traceback (most recent call last):
49: from bin/rails:4:in `<main>'
48: from bin/rails:4:in `require'
...
1: from /rails/activerecord/lib/active_record/database_configurations/connection_url_resolver.rb:58:in `query_hash'
/rails/activerecord/lib/active_record/database_configurations/connection_url_resolver.rb:58:in `[]': invalid number of elements (3 for 1..2) (ArgumentError)
```
After this PR, rails can properly parse the configuration:
```
> bundle exec rails console
Loading development environment (Rails 6.1.0.alpha)
2.6.5 :001 > ActiveRecord::Base.connection.select_all("show mysetting.debug").to_a
(0.4ms) show mysetting.debug
=> [{"mysetting.debug"=>"on"}]
```
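The parsing fix can be sketched with plain Ruby and the stdlib `URI` (`query_hash` here is illustrative, not the actual resolver method): split each query pair on the first `=` only, so values may themselves contain `=`.

```ruby
require "uri"

def query_hash(url)
  query = URI.parse(url).query
  # split("=", 2) keeps any further "=" inside the value
  Hash[query.split("&").map { |pair| pair.split("=", 2) }]
end

query_hash("postgresql://localhost/db?options=-cmysetting.debug=on")
# => { "options" => "-cmysetting.debug=on" }
```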
- SQLite allows the database to be specified as a URL (given that URI filename
interpretation was turned on for the connection).
This commit is necessary for the read_uncommitted transaction
feature because SQLite doesn't allow enabling the shared-cache mode
if the database name is `:memory:`. It has to be a URI (`file::memory:`)
Ref https://www.sqlite.org/sharedcache.html#shared_cache_and_in_memory_databases
- ### Use case
I'd like to be able to see changes made by a connection writer
within a connection reader before the writer transaction commits
(aka `read_uncommitted` transaction isolation level).
```ruby
conn1.transaction do
Dog.create(name: 'Fido')
conn2.transaction do
Dog.find_by(name: 'Fido') # -> Can't see the dog until conn1 commits the transaction
end
end
```
Other adapters in Rails (mysql, postgres) already support multiple
types of isolated db transactions.
SQLite doesn't support the 4 main ones but it supports
`read_uncommitted` and `serializable` (the default when opening
a transaction).
### Solution
This PR allows developers to open a `read_uncommitted` transaction by
setting the PRAGMA `read_uncommitted` to true for the duration
of the transaction. That PRAGMA can only be enabled if the SQLite
connection was established with the [shared-cache mode](https://www.sqlite.org/sharedcache.html).
This feature can also benefit the framework and we could potentially
get rid of the `setup_shared_connection_pool` inside tests which
was a solution in the context of a multi-db app so that the reader
can see stuff from the open transaction writer but has some [caveats](https://github.com/rails/rails/issues/37765#event-2828609021).
### Edge case
Shared-cache mode can be enabled for an in-memory database as well;
however, for backward compatibility reasons, SQLite only allows
setting the shared-cache mode if the database name is a URI.
It won't allow it if the database name is `:memory:`; it has to be
changed to `file::memory:` instead.
Applying `includes` and `joins` to a relation that selected additional
database fields would cause those additional fields not to be included
in the results even though they were queried from the database:
```ruby
posts = Post.select('1 as other').includes(:tbl).joins(:tbl)
posts.to_sql.include?('1 as other') #=> true
posts.first.attributes.include?('other') #=> false
```
This commit includes these additionally selected fields in the
instantiated results.
The `database` kwarg in `connected_to` has resulted in a lot of bug
reports that are trying to use it for sharding when that's not the
intent of the key. After considering where the database kwarg is used in
tests and thinking about use cases for it, we've determined it should be
removed.
There are plans to add sharding support and in the mean time the
database kwarg isn't the right solution for that. Applications that need
to create new connections can use establish_connection or connects_to.
Since the database key causes new connections to be established on every
call, that causes bugs if connected_to with a database kwarg is used
during a request or for any connection that's not a one-off.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
If a spec name was provided without an env name, config_for would return
the first config that matched the spec, regardless of environment name.
Now configs_for will return the database config that matches the
default env and requested spec name.
Additionally this commit has moved the default env call into a method
because I'm tired of typing so many lines every single time.
We considered either returning all configs that match that spec name or
raising an error if only spec was passed, but this change has the least
impact on current behavior and matches Active Record's assumptions: that
if you ask for configs it will always consider the current environment.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Testing for sqlite3 makes an extra db file `primary.sqlite3` as below.
```
$ bundle exec rake test:sqlite3
(snip)
$ git status
On branch test
Untracked files:
(use "git add <file>..." to include in what will be committed)
db/
```
- ### Problem
Connection pools are not properly shared. A replica can't see the
data in the open transaction on the writing connection.
```ruby
Dog.create!(name: 'bilou')
AnimalsBase.connected_to(role: :reading) do
Dog.find_by(name: 'bilou') # No result
end
```
The reason for this bug is that when the test starts, we iterate
over `AR::Base.connection_handlers`, which only contains the `writing`
handler since models haven't been loaded (the app isn't eager loaded
in the test env).
### Solution
Share connection pools as soon as a connection is established
by creating a subscriber.
- ### Problem
If one wants to use ActiveRecord::TestFixtures it is mandatory for
the test suite to inherit from `ActiveSupport::TestCase`.
TestFixtures makes use of specific methods from AS::TestCase
(`file_fixture_path` and `method_name`).
### Solution
This PR fixes that by not making use of method_name and file_fixture_path.
There is an unused connection handler in a test and an extra
connection handler is made, so testing for sqlite3 makes an empty db
file as below.
```
$ bundle exec rake test:sqlite3
(snip)
$ git status
On branch test
Untracked files:
(use "git add <file>..." to include in what will be committed)
db/
```
Currently, when checking that the collection has been loaded, only the first
record is checked. In specific scenarios, if a record is fetched via an `after_initialize`
hook, there is a chance that the first record has been loaded, but other records in the
collection have not.
In order to successfully short circuit the fetching of data, we need to verify that
all of the records in the collection have been loaded.
* Create test for edge case
* Move `reset_callbacks` method to `cases/helper`, since it has been defined in multiple
locations.
Closes #37730
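The stricter check, shown with stand-in objects (not the Preloader code): previously only `records.first` was consulted, so a mixed collection wrongly short-circuited the preload.

```ruby
# Each stand-in record knows whether its association is loaded.
Record = Struct.new(:assoc_loaded) do
  def association_loaded?
    assoc_loaded
  end
end

records = [Record.new(true), Record.new(false)]

records.first.association_loaded?   # => true: old check wrongly skips the preload
records.all?(&:association_loaded?) # => false: new check preloads as needed
```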
In a forking process web server like Unicorn, connections are
reconnected after fork. This test ensures that when connections are
reconnected after fork while using multiple databases, the query cache
will be on and work correctly.
This test fails on Rails 6.0 for now.
We have these two objects, `ConnectionAdapters::Resolver` and
`DatabaseConfigurations` that implement a lot of the same logic. One of
them is used for configurations defined in `config/database.yml` and the
other is used when passing raw configurations `String` or `Hash` objects
into methods like `establish_connection`.
Over time these two have diverged a bit. In the interest of less code
complexity, and more consistency for users this commit brings them back
together.
* Remove `Resolver` altogether and replace its primary method with
`DatabaseConfigurations#resolve`.
* Move `resolve_pool_config` over to the `ConnectionPool` alongside the code
that uses it.
This commit renames `RoleManager` -> `PoolManager` and `Role` ->
`PoolConfig`.
Having introduced the previous commit, and looking at the existing
code, it's clear that `Role` and `RoleManager` are not the right names
for these.
Since this PR moves away from swapping the connection handler concepts
around and the role concept will continue existing on the handler level,
we need to rename this.
A `PoolConfig` holds a `connection_specification_name` (we may rename
this down the road), a `db_config`, a `schema_cache`, and a `pool`. It
does feel like `pool` could eventually hold all of these things instead
of having a `PoolConfig` object. This would remove one level of the
object graph and reduce complexity. For now I'm leaving this object to
keep the change churn low and will revisit later.
Co-authored-by: John Crepezzi <seejohnrun@github.com>
This PR is an alternate solution to #37388. While there are benefits
to merging #37388 it changes the public API and swaps around existing
concepts for how connection management works. The changes are
backwards-incompatible and pretty major. This will have a negative impact
on gems and applications relying on how conn management currently works.
**Background:**
Shopify and other applications need sharding but Rails has
made it impossible to do this because a handler can only hold one
connection pool per class. Sharded apps need to hold multiple
connections per handler per class.
This PR aims to solve only that problem.
**What this PR does:**
In this PR we've added a `RoleManager` class that can hold multiple
`Roles`. Each `Role` holds the `db_config`,
`connection_specification_name`, `schema_cache` and `pool`. By default
the `RoleManager` holds a single reference from a `default` key to the
`Role` instance. A sharded/multi-tenant app can pass an optional second
argument to `remove_connection`, `retrieve_connection_pool`,
`establish_connection` and `connected?` on the handler, thus allowing
for multiple connections belonging to the same class/handler without
breaking backwards compatibility.
By using the `RoleManager` we can avoid altering the public API, moving
around handler/role concepts, and achieve the internal needs for
establishing multiple connections per handler per class.
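The manager shape described above can be sketched minimally. This is a simplification; `set_role`/`get_role` and the string values are illustrative names, not the exact internal API.

```ruby
# Sketch: a per-class manager holding multiple entries keyed by an
# optional second argument, with :default preserving today's behavior.
class RoleManager
  def initialize
    @roles = {}
  end

  def set_role(role, key = :default)
    @roles[key] = role
  end

  def get_role(key = :default)
    @roles[key]
  end
end

manager = RoleManager.new
manager.set_role("primary-writer")                # existing single-pool case
manager.set_role("shard-one-writer", :shard_one)  # sharded/multi-tenant case
manager.get_role             # => "primary-writer"
manager.get_role(:shard_one) # => "shard-one-writer"
```

Because callers that omit the key get `:default`, the existing one-pool-per-class behavior is unchanged, which is how backwards compatibility is preserved.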
**A note about why we opened this PR:**
We very much appreciate the work that went into #37388 and in no way mean
to diminish that work. However, it breaks the following public APIs:
* `#retrieve_connection`, `#connected?`, and `#remove_connection` are
public methods on handler and can't be changed from taking a spec to a
role.
* The knowledge that the handler keys are symbols relating to a role
(`:writing`/`:reading`) is public - changing how handlers are accessed
will break apps/libraries.
In addition it doesn't solve the problem of mapping a single connection
to a single class since it has a 1:1 mapping of `class (handler) -> role
(writing) -> db_config`. Multiple pools in a writing role can't exist
in that implementation.
The new PR solves this by using the `RoleManager` to hold multiple connection
objects for the same class. This lets a handler hold a role manager
which can hold as many roles for that writer as the app needs.
**Regarding the `Role` name:**
When I originally designed the API for multiple databases, it wasn't
accidental that handler and role are the same concept. Handler is the
internal concept (since that's what was there already) and Role was the
public external concept. Meaning, role and handler were meant to
be the same thing. The concept here means that when you switch a
handler/role, Rails automatically can pick up the connection on the
other role by knowing the specification name. Changing this would mean not
just that we need to rework how GitHub and many many gems work, but also
means retraining users of Rails 6.0 that all these concepts changed.
Since this PR doesn't move around the concepts in connection
management and instead creates an intermediary between `handler` and
`role` to manage the connection data (`db_config`, `schema_cache`,
`pool`, and `connection_specification`) we think that `Role` and
`RoleManager` are the wrong name.
We didn't change it yet in this PR because we wanted to keep change
churn low for initial review. We also haven't come up with a better
name yet. 😄
**What this PR does not solve:**
Our PR here solves a small portion of the problem - it allows models to
have multiple connections on a class. It doesn't aim to solve any other
problems than that. Going forward we'll need to still solve the
following problems:
* `DatabaseConfig` doesn't support a sharding configuration
* `connects_to`/`connected_to` still needs a way to switch connections for shards
* Automatic switching of shards
* `connection_specification_name` still exists
**The End**
Thanks for reading this far. These problems aren't easy to solve. John
and I spent a lot of time trying different things and so I hope that
this doesn't come across as if we think we know better. I would have
commented on the other PR what changes to make but we needed to try out
different solutions in order to get here.
Ultimately we're aiming to change as little of the API as possible. Even
if the handler/role -> manager -> db_config/pool/etc isn't how we'd
design connection management if we could start over, we also don't want
to break public APIs. It's important that we make things better while
maintaining compatibility.
The `RoleManager` class makes it possible for us to fix the underlying
problem while maintaining all the backwards compatibility in the public
API.
We all have the same goal; to add sharding support to Rails. Let me know
your thoughts on this change in lieu of #37388 and if you have questions.
Co-authored-by: John Crepezzi <seejohnrun@github.com>
- 12a9664ff60 deprecated `where.not` working as NOR. However,
a relation query like `where.not(relation: { })`
wouldn't be properly deprecated, and `where.not` would work as
NAND instead.
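Outside Active Record, the NOR/NAND difference can be demonstrated with plain predicates (illustrative data only):

```ruby
rows = [
  { name: "x", id: 1 },
  { name: "x", id: 2 },
  { name: "y", id: 1 },
  { name: "y", id: 2 },
]

# NOR: NOT name = 'x' AND NOT id = 1 — a row is excluded if ANY pair matches.
nor  = rows.reject { |row| row[:name] == "x" || row[:id] == 1 }

# NAND: NOT (name = 'x' AND id = 1) — a row is excluded only if ALL pairs match.
nand = rows.reject { |row| row[:name] == "x" && row[:id] == 1 }

nor  # => [{ name: "y", id: 2 }]      (one row survives)
nand # => three rows survive
```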
Common Table Expressions in PostgreSQL allow a different way to define
derived tables. Here's an example from the pg docs:
```sql
WITH regional_sales AS (
  SELECT region, SUM(amount) AS total_sales
  FROM orders
  GROUP BY region
), top_regions AS (
  SELECT region
  FROM regional_sales
  WHERE total_sales > (SELECT SUM(total_sales)/10 FROM regional_sales)
)
SELECT region,
       product,
       SUM(quantity) AS product_units,
       SUM(amount) AS product_sales
FROM orders
WHERE region IN (SELECT region FROM top_regions)
GROUP BY region, product;
```
https://www.postgresql.org/docs/current/queries-with.html
This commit adds the :with keyword to the set of keywords allowed to
begin a query against a `prevent_writes` PostgreSQL connection.
Thanks to @kamipo, this also adds the same support for sqlite3 and mysql2:
https://www.sqlite.org/lang_with.html
https://dev.mysql.com/doc/refman/8.0/en/with.html
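A simplified sketch of the allowlist idea (not the actual adapter code; the regex and method name are illustrative):

```ruby
# A read-only connection allowlists query prefixes; adding "with" lets
# CTE-based reads through while still rejecting writes.
READ_QUERY = /\A(?:\s*\(?\s*)(?:select|with)\b/i

def write_query?(sql)
  !READ_QUERY.match?(sql)
end

write_query?("WITH top AS (SELECT 1) SELECT * FROM top") # => false
write_query?("UPDATE posts SET title = 'x'")             # => true
```

Note that a `WITH` query can still contain writes in some databases (e.g. PostgreSQL data-modifying CTEs), so a prefix check like this is only a first line of defense.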
Rather than return a configuration hash from
`resolve_config_for_connection` which also discards the `env_name` and
`spec_name`, return a `DatabaseConfig` object!
Also make this method private since it's only used inside this class.
Co-authored-by: eileencodes <eileencodes@gmail.com>
The intent of #35868 is to make association loading consistent; it
should not have any side effects for new instance creation on an
association relation.
Fixes #37138.
In order to move schema_cache off of DatabaseConfiguration we needed to
make role accessible on pool.
While looking at `ConnectionSpecification`, `Role`, and `DatabaseConfig` John
and I noticed that this could be achieved by merging `ConnectionSpecification`
and `Role` into one `Role` class. This allows us to eliminate the `spec`
concept which is confusing. `spec` is a private method so renaming
to `resolve_role` is ok.
In the `Role` class we took `name` (renamed to `connection_specification_name`
for clarity since it's not a `role` name) and `db_config` from
`ConnectionSpecification` and the `pool` methods from `Role` and combined
them into one `Role` class.
This feels a lot cleaner to us because it clarifies the purposes of the
classes/methods/variables, and makes it easier to drop
`connection_specification_name` keyed on the parent class in the future.
There are a lot of changes in here but the majority of them are find and
replace `spec` -> `role`, `spec.name` ->
`role.connection_specification_name`, `Resolver#spec` ->
`Resolver#resolve_role`.
This PR also moves the `schema_cache` from `DatabaseConfig` to the new
combined `Role` class.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This reverts commit f54a8d0d4373c81040412f4ffb1b4e30326a5126.
I am so sorry. I accidentally applied this on top of another PR to do
some testing and then when I merged that one this got merged. I'm
reverting this because we don't need to revert this now.
This reverts commit 4ea7769044748b04c62c374eca6276d67b8c4c43, reversing
changes made to 2ef9ecaf454f69db256da01001ad6cb0d5609072.
I'm reverting this PR for now at Rafael's recommendation because
some of the changes have caused our Rails upgrade at GitHub to
be blocked (we use some private API's) as well as other work to
use the db_config to establish a connection. See PR and comments
here https://github.com/rails/rails/pull/37408 for more detail.
Make has_many inversing support available through an opt-in config
variable. This behaviour is likely to break existing applications, but
it is correct behaviour.
The results are deduplicated when eager loading. However, eager loading
with limit for non-`:has_many` associations, and joins for `:has_many`
associations, do not handle this, while eager loading for `:has_many`
does.
Fixes #37356
Stop running association callbacks when setting `has_many` inverse
records. The `before_add` and `after_add` callbacks are meant for
newly added records.
`Migration::CheckPending` is a Rack middleware, normally used in
development, that raises an exception if there are pending migrations.
This commit replaces the existing caching, which avoided checking all
migrations if the newest migration (by version number) hadn't changed.
Instead, we now use FileUpdateChecker, which has two advantages: it can
detect new migrations which aren't the highest version, and it is
faster.
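The mtime-based idea can be sketched in plain Ruby. This is an illustrative stand-in for the approach, not `ActiveSupport::FileUpdateChecker` itself:

```ruby
require "tmpdir"
require "fileutils"

# Tracks the file list and mtimes of a directory; notices ANY new or
# changed file, not just the one with the highest version number.
class SimpleUpdateChecker
  def initialize(dir)
    @dir = dir
    @last_snapshot = snapshot
  end

  def updated?
    current = snapshot
    changed = current != @last_snapshot
    @last_snapshot = current
    changed
  end

  private

  def snapshot
    Dir.glob(File.join(@dir, "*.rb")).sort.map { |f| [f, File.mtime(f)] }
  end
end

dir = Dir.mktmpdir
checker = SimpleUpdateChecker.new(dir)
checker.updated? # => false (nothing changed yet)
FileUtils.touch(File.join(dir, "001_create_users.rb"))
checker.updated? # => true (a new migration appeared)
```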
If a custom validation context is used, the validations on dependent
association records should fire regardless of whether those records have
`changed_for_autosave?` or not as the use of a custom validation context
increases the chance that validations for the context will be different
to those from when the record was first saved.
6ea80b6 changed the autosave behaviour for single associations such that
validations would only fire on an associated record if the record was
`changed_for_autosave?`. This brought the behaviour in line with that for
collection associations, but introduced a regression in that validations
would no longer fire on dependent associations, even when using a custom
validation context.
This change updates the behaviour for both single and collection
associations.
This commit also updates another related regression test that became a
potential false-positive when 6ea80b6 was merged. The original test was
written to protect against non-custom validations firing on associations
(see #14106). At the time validations on autosaving singular
associations would always fire, even when the association was not dirty,
whereas after 6ea80b6 the validations only fire if the association is
dirty. This update to the test makes the association record dirty to
ensure the validations will fire so that the code responsible for
excluding non-custom contexts is correctly exercised.
Accessing the MySQL manual without a specific version number will redirect to the latest version of the manual.
Remove non-functional id from a comment in favor of an explicit message
As it is, when the primary key is passed in to `insert_all` as `:unique_by`, that column (or those columns) is omitted from the conflict target, resulting in the conflict target being implicitly _all_ unique indexes/constraints on the table. However, in a database like PostgreSQL, some of the constraints considered may be deferred, and those cannot be used in a conflict target. In such cases, it is helpful to be able to explicitly enumerate the primary key(s) in the conflict target so that _only_ that key is considered when determining the `ON CONFLICT` behavior.
Since 6d1c9a9b23579d13b7cbdd3217ed2cfcc300e06e the `sql.active_record`
notification includes the connection that executed the query, but only
when the result comes from the database, not the query cache.
We are moving to using `DatabaseConfig` objects everywhere inside of
Rails instead of passing around Hash objects.
This PR moves to using `DatabaseConfig` objects inside of the individual
database tasks adapters. In the interest of not breaking existing
adapters, we've introduced an upgrade path by which adapters can get a
hold of database configuration objects instead of Hashes by implementing
a method `self.using_database_configurations?`
Co-authored-by: eileencodes <eileencodes@gmail.com>
When we moved the defaults from hash accessors to methods a few of the
settings got mis-configured. This change fixes the defaults and adds a
hash config test to prevent future regressions.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Convert all uses of `db_config.configuration_hash[*]` to use methods
defined on an implementation of `DatabaseConfigurations::DatabaseConfig`.
Since we want to get away from accessing properties directly on the
underlying configuration hash, we'll move here to accessing those values
via the implementations on `DatabaseConfig` (or more specifically,
`HashConfig`).
There are still codepaths that are passing around `configuration_hash`,
and follow-on PRs will address those with the goal of using
configuration objects everywhere up until the point we pass a resolved
hash over to the underlying client.
Co-authored-by: eileencodes <eileencodes@gmail.com>
A `UrlConfig` is a `HashConfig` that mixes in connection information
based on a `url` passed from the constructor.
Previously, for each new property we wanted to introduce an accessor
for, we'd need to write an implementation for `HashConfig` _and_ for
`UrlConfig`.
Now we make `UrlConfig` descend from `HashConfig` so we can write a
single implementation for both cases.
Also while we're in here, take the `adapter` implementation into
`HashConfig` so that `DatabaseConfig` isn't making any assumptions about
the structure of its subclasses.
Co-authored-by: eileencodes <eileencodes@gmail.com>
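The subclassing idea can be sketched as follows (simplified stand-ins, not Rails' actual classes or method bodies):

```ruby
require "uri"

# UrlConfig merges URL-derived options into the hash up front, then
# inherits every HashConfig accessor for free.
class HashConfig
  attr_reader :configuration_hash

  def initialize(configuration_hash)
    @configuration_hash = configuration_hash
  end

  def adapter
    configuration_hash[:adapter]
  end

  def pool
    configuration_hash.fetch(:pool, 5)
  end
end

class UrlConfig < HashConfig
  def initialize(url, configuration_hash = {})
    uri = URI.parse(url)
    super(configuration_hash.merge(
      adapter: uri.scheme,
      host: uri.host,
      database: uri.path.delete_prefix("/")
    ))
  end
end

config = UrlConfig.new("postgres://db.example/app_development")
config.adapter # => "postgres"
config.pool    # => 5 (inherited default, no UrlConfig override needed)
```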
A rollback in `after_commit` would accidentally roll back state that had
already succeeded in the previous commit.
To fix this problem, remaining transaction state should immediately be
cleared after a transaction is committed, before `after_commit` callbacks
are invoked.
Fixes #37152.
Since #36303 we allow specific fixtures to be excluded from loading, but
because that introduced a new fixture `other_books` that uses the same
underlying model `Book`, sometimes depending on the run order the tests
might end up not loading the actual `books` fixtures thinking they are
already cached/loaded. Disabling the transactional fixtures for that
test that was introduced above for the ignore fixtures feature fixes
that.
Closes #37244.
Closes #37245.
Thanks @yahonda for finding a way to reproduce these failures.
We have an internal engine whose models, for backwards compatibility with older Rails versions, override #connection_specification_name to connect to a shared DB rather than using Rails 6 multi-DB support.
b8fc0150 changed ActiveRecord::ConnectionAdapters::ConnectionPool#retrieve_connection_pool to pass ActiveRecord::DatabaseConfigurations::DatabaseConfig#config_hash to #establish_connection where it previously passed ActiveRecord::ConnectionAdapters::ConnectionSpecification#to_hash. But DatabaseConfig#config_hash does not include the specification name, so #establish_connection falls back to `"primary"`. The host app's primary DB connections and our engine's shared DB connections get mixed in the same pool as a result. Engine models will sometimes erroneously check out app DB connections and app models will sometimes check out engine DB connections.
To fix, we merge the connection specification name into the DB config hash before passing it into ConnectionPool#establish_connection.
We're moving in the direction of getting `DatabaseConfig` to be used in
more places instead of `Hash` connection configurations being passed
around directly.
Here we're making all of `DatabaseTasks` and `databases.rake` use
database configuration objects.
A few notes:
* This also contains some tests that were missing for public methods on
`DatabaseTasks`: `charset_current` & `collation_current`
* There is a deprecation of some arguments in `schema_up_to_date?` that
are now duplicative/confusing. One of them wasn't used previously
(`environment`), and now `spec_name` can be pulled directly off of the
`db_config` (`DatabaseConfig`) object.
Co-authored-by: eileencodes <eileencodes@gmail.com>
If a relation has only eager_load and string joins, the string joins will
be regarded as leading joins, unlike before, due to #36805.
To maintain the previous join order, check eager loading joins
first, before string joins.
Fixes #37133.
These should have been deprecated when I added them but for some reason
I didn't.
As we move away from passing hashes around we no longer need these
methods, and since we no longer use configuration hashes as database
configuration, we can deprecate these in favor of using the database
configuration objects and accessing the connection hashes from there.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Uses of `current_config` and `current_config=` were removed when
implementing multiple database Rails tasks. This means they are no
longer used internally. However, since it is a documented method it may
be used in gems or applications. This change deprecates these methods so
that we can remove them in 6.2.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This commit fixes a method typo that was introduced in #37199, and also
adds tests for the `DatabaseTasks#current_config` method given its
current implementation.
While I was in here, I also made it so that if `current_config` is
called with an `env` it won't try to look up `Rails.env`, which could
result in an error for a non-Rails application.
IMO this method should be deprecated & removed as it's no longer used in
Rails, is confusing (for example: `test_current_config_read_after_set`),
and is currently dependent on Rails if `env` isn't passed.
Co-authored-by: eileencodes <eileencodes@gmail.com>
Eventually we'd like to get rid of this class altogether but for now
this PR reduces the surface area by removing methods from the class and
moving classes out into their own files.
* `adapter_method` was moved into database configurations
* `initialize_dup` was removed because it was only used in tests
* `Resolver` is now its own class under connection adapters
* `ConnectionUrlResolver`, only used by the configurations, is now a class
under `DatabaseConfigurations`
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This is part 1 of removing the need to pass hashes around inside
connection management. This PR creates database config objects every
time when passing a hash, string, or symbol and sends that object to
connection specification. ConnectionSpecification reaches into the
config hash on db_config when needed, but eventually we'll get rid of
that and ConnectionSpecification since it's doing duplicate work with
the DatabaseConfigurations.
We also chose to change the keys to strings because that's what the
database.yml would create and what apps currently expect. While symbols
are nicer, we'd end up having to deprecate the string behavior first.
Co-authored-by: eileencodes <eileencodes@gmail.com>
Fixes a bug where has_one through relations with non-integer primary
keys were incorrectly raising an implementation error even though a
reflectable association was present.
Previously in some places we used symbol keys, and in some places we used
string keys. That made it pretty confusing to figure out in a particular
place what type of configuration object you were working with.
Now internally, all configuration hashes are keyed by symbols and
converted to such on the way in.
A few exceptions:
- `DatabaseConfigurations#to_h` still returns strings for backward compatibility
- Same for `legacy_hash`
- `default_hash` previously could return strings, but the associated
comment says it returns a symbol-keyed `Hash`, and now it always does
Because this is a change in behavior, a few method renames have happened:
- `DatabaseConfig#config` is now `DatabaseConfig#configuration_hash` and returns a symbol-key `Hash`
- `ConnectionSpecification#config` is now `ConnectionSpecification#underlying_configuration_hash` and returns the `Hash` of the underlying `DatabaseConfig`
- `DatabaseConfig#config` was added back, returns `String`-keys for backward compatibility, and is deprecated in favor of the new `configuration_hash`
Co-authored-by: eileencodes <eileencodes@gmail.com>
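The normalization on the way in can be sketched like this (a simplified stand-in for the symbolization Rails applies internally via Active Support):

```ruby
# Recursively convert configuration hash keys to symbols, so internal
# code can rely on one key type regardless of how the config arrived.
def symbolize_keys(hash)
  hash.each_with_object({}) do |(key, value), out|
    out[key.to_sym] = value.is_a?(Hash) ? symbolize_keys(value) : value
  end
end

symbolize_keys("adapter" => "sqlite3", "replica" => { "pool" => 2 })
# => { adapter: "sqlite3", replica: { pool: 2 } }
```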
When this case was hit previously, an error would be thrown like
`<NoMethodError: "undefined method config for...">` because we were
erroneously calling `config` on a `Hash`.
This commit adds a test for this case, and fixes the hash access.
These tests are causing the Ruby master build to fail in CI. The Docker
image we use is based on Ubuntu Bionic which provides SQLite 3.22.0, and
the features required to use `insert_all` and `upsert_all` are not
available in that version.