Related to #39250 and #39236.
The purpose of this change is to unify inconsistent merging behavior.
Currently, the mergee side's condition is replaced by the merger side's
condition only if both Arel nodes are Equality or In clauses.
In other words, if the mergee side's condition is not an Equality or In
clause (e.g. `between`, `or`, `gt`, `lt`, etc.), both conditions are
kept even on the same column.
This behavior is hard to predict unless people are familiar with the
merging internals.
Originally I supposed this behavior was just an implementation issue
rather than an intended one, since `unscope` and `rewhere`, which were
introduced later than `merge`, work more consistently.
Since most conditions are usually Equality or In clauses, I don't
suppose many people have encountered this merging issue, but I'd like
to deprecate the inconsistent behavior and completely unify it to
improve the future UX.
```ruby
# Rails 6.1 (IN clause is replaced by merger side equality condition)
Author.where(id: [david.id, mary.id]).merge(Author.where(id: bob)) # => [bob]
# Rails 6.1 (both conflicting conditions exist, deprecated)
Author.where(id: david.id..mary.id).merge(Author.where(id: bob)) # => []
# Rails 6.1 with rewhere to migrate to Rails 6.2's behavior
Author.where(id: david.id..mary.id).merge(Author.where(id: bob), rewhere: true) # => [bob]
# Rails 6.2 (same behavior as the IN clause, mergee side condition is consistently replaced)
Author.where(id: [david.id, mary.id]).merge(Author.where(id: bob)) # => [bob]
Author.where(id: david.id..mary.id).merge(Author.where(id: bob)) # => [bob]
```
Currently, `increment` with an aliased attribute works, but
`increment!` with an aliased attribute does not, because
`clear_attribute_change` is not aware of attribute aliases.
We sometimes partially update specific attributes' dirty state, relying
on `clear_attribute_change` to clear the partially updated attributes'
changes. If `clear_attribute_change` is not an attribute method unlike
the others, we need to resolve attribute aliases manually just for
`clear_attribute_change`, which is a little inconvenient.
From another point of view, we have `restore_attributes`,
`restore_attribute!`, `clear_attribute_changes`, and
`clear_attribute_change`. Despite their almost identical features,
`restore_attribute!` is an attribute method but `clear_attribute_change`
is not.
Given the above, I'd like to promote `clear_attribute_change` to an
attribute method to fix the issues caused by the inconsistency.
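The alias-aware behavior can be sketched in plain Ruby. This is a minimal dirty-tracking toy, not Rails internals; the class name, alias map, and attribute names are all hypothetical:

```ruby
# Minimal sketch: dirty tracking where clear_attribute_change resolves
# attribute aliases, just like the other attribute methods do.
class TinyModel
  ALIASES = { "total" => "amount" } # hypothetical attribute alias

  def initialize
    @attributes = { "amount" => 0 }
    @changes = {}
  end

  def write_attribute(name, value)
    name = resolve(name)
    @changes[name] = [@attributes[name], value]
    @attributes[name] = value
  end

  # Alias-aware: clearing via the alias clears the real attribute's change.
  def clear_attribute_change(name)
    @changes.delete(resolve(name))
  end

  def attribute_changed?(name)
    @changes.key?(resolve(name))
  end

  private

  def resolve(name)
    ALIASES.fetch(name.to_s, name.to_s)
  end
end

model = TinyModel.new
model.write_attribute(:total, 100)
model.attribute_changed?(:amount)    # => true
model.clear_attribute_change(:total) # clearing via the alias now works
model.attribute_changed?(:amount)    # => false
```

Without the `resolve` call in `clear_attribute_change`, the last call would leave the `"amount"` change behind, which is the kind of inconsistency described above.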
Based on the docs, which state that the `unique_by` option of
`insert_all` can use an index name if desired, one would expect the
method to work normally and use the `unique_by` option to determine
duplicates.
However, `insert_all` expects a Set instead of the string representing
the index expression it is given, which causes an error. Returning the
string expression instead of attempting to format it works correctly.
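The idea of the fix can be sketched in plain Ruby. The helper name and data below are hypothetical, not the Rails implementation: when `unique_by` names a known index, the string expression is passed through unchanged instead of being reformatted into a column set.

```ruby
# Hypothetical sketch: resolve a unique_by option against known index names.
def unique_by_clause(unique_by, index_names)
  # If unique_by matches an index name, return the string expression as-is.
  return unique_by.to_s if index_names.include?(unique_by.to_s)

  # Otherwise treat it as one or more column names.
  Array(unique_by).map(&:to_s)
end

index_names = ["index_books_on_isbn"]
unique_by_clause(:index_books_on_isbn, index_names) # => "index_books_on_isbn"
unique_by_clause([:author_id, :name], index_names)  # => ["author_id", "name"]
```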
Related to #39495.
Currently, `read_attribute`, `write_attribute`, `[]`, and `[]=` are
aware of attribute aliases, but `has_attribute?` is not. It is easy to
forget attribute alias resolution before calling `has_attribute?`,
which is very inconvenient.
I don't think the inconvenience is intended, so `has_attribute?` should
be aware of attribute aliases like the others, for consistency.
Currently, the target attribute allows attribute aliases, but the
`:scope` attribute does not; the cause is that the former uses
`read_attribute_for_validation` but the latter does not.
Unfortunately we cannot use `read_attribute_for_validation` in this
case, since it intentionally bypasses custom attribute getters to allow
#7072.
To support both aliases and #7072, `read_attribute` should be used to
resolve attribute aliases.
Currently, the timestamp magic columns are only allowed for real
physical columns. That is not a problem for newly created apps, but it
makes the feature harder to benefit from with legacy databases.
The reason aliases don't work is that some low-level APIs do not care
about attribute aliases. I don't think using the low-level APIs without
attribute alias resolution for timestamp attributes is intended (e.g.
`updated_at_before_type_cast` works, but `has_attribute?("updated_at")`
and `_read_attribute("updated_at")` don't).
I've addressed all the missing attribute alias resolution for timestamp
attributes so that they work consistently.
Fixes #37554.
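The aliased-timestamp scenario can be sketched in plain Ruby. All names here (the class, the `modified_at` legacy column, the helper methods) are hypothetical stand-ins, not Rails internals:

```ruby
# Sketch: a legacy table whose physical column is "modified_at",
# aliased to the magic "updated_at" name.
class LegacyRecord
  ALIASES = { "updated_at" => "modified_at" } # hypothetical alias map

  def initialize
    @attributes = { "modified_at" => nil }
  end

  def has_attribute?(name)
    @attributes.key?(resolve(name))
  end

  def _write_attribute(name, value)
    @attributes[resolve(name)] = value
  end

  # With alias resolution, the magic-column check and the write both
  # succeed even though the physical column is "modified_at".
  def touch(time = Time.now)
    _write_attribute("updated_at", time) if has_attribute?("updated_at")
    @attributes["modified_at"]
  end

  private

  def resolve(name)
    ALIASES.fetch(name.to_s, name.to_s)
  end
end

record = LegacyRecord.new
record.touch # writes the physical "modified_at" column
```

Without the `resolve` step, `has_attribute?("updated_at")` would return false and `touch` would silently do nothing, which mirrors the low-level-API problem described above.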
This allows enabling/disabling strict_loading mode by default for a
model. The configuration's value is inherited by subclasses, but they
can override it without impacting the parent class:
```ruby
class Developer < ApplicationRecord
  self.strict_loading_by_default = true

  has_many :projects
end

dev = Developer.first
dev.projects.first
# => ActiveRecord::StrictLoadingViolationError Exception: Developer is marked as strict_loading and Project cannot be lazily loaded.
```
What is great about this feature is that it can help users nip N+1
queries in the bud, especially in fresh applications, by setting
`ActiveRecord::Base.strict_loading_by_default = true` / `config.active_record.strict_loading_by_default = true`.
It is also a great way to prevent new N+1 queries in existing
applications after all the existing N+1 queries have been eliminated.
(See https://guides.rubyonrails.org/v6.0/active_record_querying.html#eager-loading-associations,
https://github.com/seejohnrun/prelude for details on how to fight against N+1 queries).
Related to https://github.com/rails/rails/pull/37400, https://github.com/rails/rails/pull/38541
```ruby
class Post < ActiveRecord::Base
  alias_attribute :alias_id, :id
end

Benchmark.ips do |x|
  x.report("find_by alias") do
    Post.find_by(alias_id: 1)
  end
end
```
Before:
```
Warming up --------------------------------------
find_by alias 419.000 i/100ms
Calculating -------------------------------------
find_by alias 4.260k (± 3.9%) i/s - 21.369k in 5.023768s
```
After:
```
Warming up --------------------------------------
find_by alias 1.182k i/100ms
Calculating -------------------------------------
find_by alias 12.122k (± 4.1%) i/s - 61.464k in 5.080451s
```
This commit does the following:
- Hides `StrictLoadingScope` from the API docs (https://edgeapi.rubyonrails.org/)
- Documents `ActiveRecord::Base#strict_loading?` and `ActiveRecord::Base#strict_loading!`
methods.
- Adds the test case for `ActiveRecord::Base#strict_loading!` since it is
a public API.
Follow up to #27962.
#27962 only deprecated `quoted_id`, but still conservatively allowed
passing an Active Record object.
Since the quoting methods on a `connection` are low-level APIs and the
querying API does not rely on that ability, people should pass a casted
value instead of an Active Record object when using the quoting methods
directly.
The signed id feature introduced in #39313 can cause loading issues,
since it may try to generate a key before the secret key base has been
set. To prevent this, wrap the secret initialization in a lambda.
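The deferral trick can be sketched in plain Ruby; the variable and key-derivation names below are illustrative, not the actual Rails code:

```ruby
# Eagerly deriving the key at load time would raise if the secret
# isn't configured yet; wrapping the work in a lambda defers it until
# the key is actually needed.
secret_key_base = nil

signing_key = lambda do
  raise "secret_key_base is not set" unless secret_key_base
  "key-derived-from-#{secret_key_base}"
end

secret_key_base = "s3kr1t" # set later, e.g. during application boot
signing_key.call           # the derivation only runs now
```

Because the lambda closes over `secret_key_base`, it sees the value assigned after the lambda was built, which is exactly why load order stops mattering.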
#39378 changed `join_scopes` to use `build_scope`, which relies on
`reflection.klass`, but `reflection.klass` is restricted for polymorphic
associations, so the klass for the association should be passed
explicitly.
Related to #7380 and #7392.
`merge` has allowed overwriting non-attribute nodes since #7392, so
`merge(..., rewhere: true)` should have the same ability, to allow
migrating from the current half-baked behavior to the entirely
consistent new behavior.
An error occurs when you pass a relation with SQL comments to the `or` method.
```ruby
class Post
  scope :active, -> { where(active: true).annotate("active posts") }
end

Post.where("created_at > ?", Time.current.beginning_of_month)
    .or(Post.active)
```
To make this work without an `ArgumentError`, this changes the `or`
method to ignore SQL comments in the argument.
Ref: https://github.com/rails/rails/pull/38145#discussion_r363024376
Before #36604, `enum` and `set` columns were incorrectly dumped as a
`string` column.
If an `enum` column was defined as `foo enum('apple','orange')`, it was
dumped as `t.string :foo, limit: 6`, where `limit: 6` seems meant to
reject invalid strings longer than `'orange'`.
But now, `enum` and `set` columns are correctly dumped as `enum` and
`set` columns, so the limit derived from the longest element is no
longer used.
Five years ago, I made the schema dumper dump full table options in
#17569, especially to dump `ENGINE=InnoDB ROW_FORMAT=DYNAMIC` in order
to use utf8mb4 with a large key prefix.
At that time, omitting the default engine `ENGINE=InnoDB` was not
useful, since `ROW_FORMAT=DYNAMIC` always remained as long as utf8mb4
with a large key prefix was used.
But now that MySQL 5.7.9 has finally changed the default row format to
DYNAMIC, utf8mb4 with a large key prefix can be used without dumping
the default engine and row format explicitly.
So now is a good time to omit the default engine.
Before:
```ruby
create_table "accounts", options: "ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci", force: :cascade do |t|
end
```
After:
```ruby
create_table "accounts", charset: "utf8mb4", collation: "utf8mb4_0900_ai_ci", force: :cascade do |t|
end
```
To allow omitting the `:options` option entirely and make the schema
agnostic, I've added `:charset` and `:collation` table options, which
exclude `CHARSET` and `COLLATE` from `:options`.
Fixes #26209.
Closes #29472.
See also #33608, #33853, and #34742.
We've learned that `merge` easily causes duplicated multiple values, so
if we miss deduplicating those values, it causes weird behavior like
#38052 and #39171.
I've investigated the deduplication of those values; it has existed
since at least Rails 3.0.
bed9179aa1
Aggregations grouped by multiple fields were introduced in Rails 3.1,
but we missed deduplicating the aggregation result, unlike the
generated SQL.
a5cdf0b9eb
During the investigation, I found that `annotate` also misses the
deduplication.
I don't suppose this weird behavior is intended in either case.
So I'd like to deprecate the duplicated behavior in Rails 6.1, and
deduplicate all multiple values in Rails 6.2.
To migrate to Rails 6.2's behavior, use `uniq!(:group)` to deduplicate
group fields.
```ruby
accounts = Account.group(:firm_id)
# duplicated group fields, deprecated.
accounts.merge(accounts.where.not(credit_limit: nil)).sum(:credit_limit)
# => {
# [1, 1] => 50,
# [2, 2] => 60
# }
# use `uniq!(:group)` to deduplicate group fields.
accounts.merge(accounts.where.not(credit_limit: nil)).uniq!(:group).sum(:credit_limit)
# => {
# 1 => 50,
# 2 => 60
# }
```
Related to #39328, #39358.
Currently, `merge` cannot override non-equality clauses, so
non-equality clauses are easily duplicated by `merge`.
This deduplicates redundant identical clauses in `merge`.
If a source/through scope references other tables in where/order, we
should explicitly maintain the joins in the scope; otherwise
association queries will fail by referencing an unknown column.
Fixes #33525.
`reflection.scope` is not aware of all source scopes if the association
is a through association.
`reflection.join_scopes` should be used for that instead.
Fixes #39376.
Bump an Active Record instance's lock version after updating its counter
cache. This avoids raising an unnecessary ActiveRecord::StaleObjectError
upon subsequent transactions by maintaining parity with the
corresponding database record's lock_version column.
The `index_exists?` method wasn't very specific, so when we added
`if_not_exists` to `add_index` and `if_exists` to `remove_index` there
were a few cases where behavior was unexpected.
For `add_index`, if you added a named index and then added a second
index with the same columns but a different name, that second index
would not get added, because `index_exists?` was looking only at the
column names and not at the exact index name. We fixed `add_index` by
moving the `index_exists?` check below `add_index_options` and passing
`name` directly to `index_exists?` when there is an `if_not_exists`
option.
For `remove_index`, if you added a named index and then tried to remove
it with a nil column and an explicit name, the index would not get
removed because `index_exists?` saw a nil column. We fixed this by only
doing the column check in `index_exists?` if `column` is present.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
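The corrected matching rules can be sketched in plain Ruby. The struct, helper, and index data below are illustrative, not the adapter's real internals:

```ruby
# Illustrative sketch of name- and column-aware index matching.
IndexDef = Struct.new(:name, :columns)

def index_exists?(indexes, column: nil, name: nil)
  indexes.any? do |index|
    (name.nil? || index.name == name.to_s) &&
      (column.nil? || index.columns == Array(column).map(&:to_s))
  end
end

indexes = [IndexDef.new("index_people_on_name", ["name"])]

# Same columns under a different name: not a match, so add_index proceeds.
index_exists?(indexes, column: :name, name: "other_name") # => false

# Nil column with an explicit name: the name alone is enough for remove_index.
index_exists?(indexes, name: "index_people_on_name")      # => true
```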
If a has_many :through association isn't found we can suggest similar associations:
```
class Author
  has_many :categorizations, -> { }
  has_many :categories, through: :categorizations
  has_many :categorized_posts, through: :categorizations, source: :post
  has_many :category_post_comments, through: :categories, source: :post_comments
  has_many :nothings, through: :kateggorisatons, class_name: "Category"
end
Author.first.nothings
Could not find the association :kateggorisatons in model Author
Did you mean? categorizations
categories
categorized_posts
category_post_comments
```
`HomogeneousIn` changed the merging behavior for NOT IN clauses from
before. This changes `equality?` to return true only if `type == :in`,
to restore the original behavior.
Related to #39236.
The `relation.merge` method sometimes replaces the mergee side's
condition, but sometimes keeps both conditions, unless
`relation.rewhere` is used.
It is very hard to predict whether the mergee side's condition will be
replaced in the merged result.
One existing way is to use `relation.rewhere` on the merger side
relation, but it is also hard to predict in advance whether a relation
will be used for `merge`, except for one-off relations built just for
`merge`.
To address that issue, I propose supporting a merging option
`:rewhere`, which makes the mergee side's condition be replaced
exactly.
That option makes a non-`rewhere` relation behave like a `rewhere`d
relation.
```ruby
david_and_mary = Author.where(id: david.id..mary.id)
# both conflicting conditions exist
david_and_mary.merge(Author.where(id: bob)) # => []
# mergee side condition is replaced by rewhere
david_and_mary.merge(Author.rewhere(id: bob)) # => [bob]
# mergee side condition is replaced by rewhere option
david_and_mary.merge(Author.where(id: bob), rewhere: true) # => [bob]
```
If an association isn't found we can suggest matching associations:
```
Post.all.merge!(includes: :monkeys).find(6)
Association named 'monkeys' was not found on Post; perhaps you misspelled it?
Did you mean? funky_tags
comments
images
skimmers
```
Add support for finding records based on signed ids, which are tamper-proof, verified ids that can be set to expire and scoped with a purpose. This is particularly useful for things like password reset or email verification, where you want the bearer of the signed id to be able to interact with the underlying record, but usually only within a certain time period.
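The underlying idea can be sketched with a minimal HMAC-based implementation in plain Ruby. This is not Rails' actual implementation (which builds on `ActiveSupport::MessageVerifier`); the secret, helper names, and payload format are all hypothetical:

```ruby
require "openssl"
require "base64"
require "json"

SECRET = "demo-secret" # illustrative only; never hardcode a real secret

# Sign an id with a purpose and expiry, producing a tamper-proof token.
def sign_id(id, purpose:, expires_in:)
  payload = Base64.strict_encode64(
    JSON.dump("id" => id, "purpose" => purpose.to_s, "exp" => Time.now.to_i + expires_in)
  )
  "#{payload}--#{OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload)}"
end

# Return the id only if the signature, purpose, and expiry all check out.
def find_signed_id(token, purpose:)
  payload, digest = token.split("--")
  return nil unless digest && OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload) == digest

  data = JSON.parse(Base64.strict_decode64(payload))
  return nil if data["exp"] < Time.now.to_i || data["purpose"] != purpose.to_s

  data["id"]
end

token = sign_id(42, purpose: :password_reset, expires_in: 15 * 60)
find_signed_id(token, purpose: :password_reset)       # => 42
find_signed_id(token, purpose: :email_verification)   # => nil (wrong purpose)
find_signed_id(token + "x", purpose: :password_reset) # => nil (tampered)
```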
This also removes the `if @transaction_state&.finalized?` guard, a
hard-to-understand optimization introduced in #36049. The guard itself
is fast enough, but removing it makes attribute access about 2% ~ 4%
faster and makes the code base easier to maintain.
`sync_with_transaction_state` was introduced in #9068 to address memory
bloat when creating lots of AR objects inside a transaction.
I've found that #18638 used the same design to address memory bloat,
but this differs from #18638 in that it allocates one `WeakMap` object
only for explicit transactions, with no extra allocation for implicit
transactions.
Executable script to reproduce the memory bloat:
https://gist.github.com/kamipo/36d869fff81cf878658adc26ee38ea1b
https://github.com/rails/rails/issues/15549#issuecomment-46035848
I can see no memory concern with this change.
Co-authored-by: Arthur Neves <arthurnn@gmail.com>
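The WeakMap idea (tracking records without retaining them) can be shown in plain Ruby; the objects here are stand-ins, not Active Record internals:

```ruby
# A WeakMap references its keys and values weakly: tracking a record in
# it does not prevent the record from being garbage collected, so
# enrolling many records in a transaction doesn't bloat memory.
records = ObjectSpace::WeakMap.new

record = Object.new # stand-in for an Active Record instance
state  = Object.new # stand-in for per-transaction state
records[record] = state

records.key?(record) # => true while the record is still alive
```

Once `record` becomes unreachable and is collected, its entry silently disappears from the map, which is the property that avoids the memory bloat.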
#38354 is caused by #36304, which fixed an invalid joins order for
through associations.
Actually, passing Arel joins to `joins` is not officially supported,
unlike string joins, and Arel joins can easily be converted to string
joins via `to_sql`. But I don't intend to break existing apps without a
deprecation cycle, so I've changed the code to mark only implicit joins
as leading joins, maintaining the original joins order for
user-supplied Arel joins.
Fixes #38354.
- Calling `Blog.where(title: ['foo', 'bar']).where_values_hash` now
returns an empty hash.
This is a regression since 72fd0bae59.
`Arel::Nodes::HomogeneousIn` isn't an `EqualityNode`, and `WhereClause`
didn't have a case for it.
I decided not to make `HomogeneousIn` inherit from `EqualityNode`,
because there is a comment questioning that for `In`: 57d926a78a/activerecord/lib/arel/nodes.rb (L31)
Instead I just modified the `WhereClause` case and implemented
`right` on the node, which is needed by `where_values_hash`: 57d926a78a/activerecord/lib/active_record/relation/where_clause.rb (L59)
Follow-up to #39264; fixes the case demonstrated in #39290.
If the column has no type caster and the model doesn't know the
attribute, we will attempt to look up the cast type from the join
dependency tree.
Currently, argument forwarding doesn't allow some keywords like `true`
as a method name.
To bypass the issue, fall back to `define_method` if the method name is
a Ruby reserved keyword.
https://bugs.ruby-lang.org/issues/16854
```ruby
class Works
  def true(*args)
    puts(*args)
  end
end

Works.new.true 1, 2, 3
# => 1, 2, 3

class WontWork
  def true(...)
    puts(...)
  end
end
```
```
% ruby a.rb
a.rb:12: syntax error, unexpected ..., expecting ')'
def true(...)
a.rb:13: unexpected ...
a.rb:15: syntax error, unexpected `end', expecting end-of-input
```
Follow up to #39255, #39039.
One of the purposes of this was to unify the behavior between the
databases.
Original code was:
```ruby
type = result.column_types.fetch(column_alias) do
  type_for(column_name)
end
```
The code attempts to look up the type from `column_types`, then falls
back to the attribute types, so I suppose the code was originally
intended to cast values by the database types.
But now, most modern clients already return casted values and no longer
use `column_types`, except Postgres.
As a result, most adapters now accidentally fall back to the attribute
types.
Since casting by attribute types sometimes doesn't return numeric
values, I unified the behavior to use database types consistently in
#39039.
But later, I learned that attribute types carry important settings like
time zone aware attributes (#39255), and that some existing code relies
on attribute types over database types (#39271).
I've changed all aggregated values to be casted by the attribute types.
Fixes #39271.
I can't for the life of me reproduce the failures occurring on
Buildkite, but I believe this is the fix. We need to only run this on
sqlite3 because we are using a sqlite3 database.
This moves the previous test into the old test and reuses the
connection that test establishes, rather than requiring us to muck with
a temporary connection pool.
The change here is more correct than the previous code: since we're
establishing new connections, we should be checking that the newly
established reading and writing connections are the same, not checking
against the existing ActiveRecord::Base.connection.
The test here also most closely emulates a real application using
multiple databases.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
#38597 is caused by #35864.
To reproduce this issue, at least four different models and three left
joins from different relations are required.
When merging a relation from a different model, newly stashed (left)
joins should be placed before existing stashed joins, but #35864 broke
that expectation when left joins are stashed multiple times.
This fixes the stashed left joins order as expected.
Fixes #38597.
Those issues are caused by using only the model's cast types on the
relation.
To fix them, the attribute's type caster now takes precedence over the
model's cast types on the relation.
Fixes #35232.
Fixes #36042.
Fixes #37484.
Follow-up to #39255.
Previously, aggregation functions only used the model's attribute types
on the relation for type casting; this change looks up the
association's attribute and type caster if a column name is table-name
qualified.
Fixes #39248.
Since we're checking `serializable?` in the new `HomogeneousIn`,
`serialize` will no longer raise an exception. We implemented
`unchecked_serialize` to avoid raising in these cases, but with some of
our refactoring we no longer need it.
I discovered this while trying to fix a query in our application that
was not properly serializing binary columns. In at least two of our
Active Model types we were not calling the correct serialization. Since
`serialize` wasn't aliased to `unchecked_serialize` in
`ActiveModel::Type::Binary` and `ActiveModel::Type::Boolean` (I didn't
check the others, but pretty sure all the AM types are affected), the
values were being treated as a `String` and not the correct type.
This caused Rails to incorrectly query by string values. This is
problematic for columns storing binary data like our emoji columns at
GitHub. The test added here is an example of how the Binary type was
broken previously. The SQL should be using the hex values, not the
string value of "🥦" or other emoji.
We still have the problem `unchecked_serialize` was supposed to fix:
`serialize` shouldn't validate data, just convert it. We'll fix that in
a follow-up PR, so for now we should use `serialize` so we know all the
values go through the right serialization for their SQL.
This is the opposite direction of #39039.
#39111 fixed `minimum` and `maximum` on date columns by type casting
with the column type on the database. But the column type has no
information about time zone aware attributes, which means the attribute
type should always take precedence over the column type. I realized
that fact from the related issue report #39248.
I've reverted the expectation of #39039, to make time zone aware
attributes work.
The uncastable through reflection check should be testing for foreign
key type overrides on the join model and not a non-integer type on the
through primary key.
Per [this discussion][arel-discussion] on the discourse forum, this is
an addition to Arel for supporting `@>` (contains) and `&&` (overlaps)
operators in PostgreSQL. They are useful for GIN-indexed data such as a
`jsonb` or array column.
[arel-discussion]: https://discuss.rubyonrails.org/t/what-has-happened-to-arel/74383/51
`ReadOnlyTest#test_field_named_field` performs an implicit commit of
the transaction started by `ReadOnlyTest#setup`, because of MySQL's
behavior.
This commit addresses the failure at https://buildkite.com/rails/rails/builds/68962#68213887-1cef-4f76-9c95-aebc8799c806
Here are the minimal steps to reproduce:
```
% ARCONN=mysql2 bin/test test/cases/readonly_test.rb test/cases/dirty_test.rb test/cases/associations/inner_join_association_test.rb \
-n "/^(?:ReadOnlyTest#(?:test_has_many_with_through_is_not_implicitly_marked_readonly)|DirtyTest#(?:test_field_named_field)|InnerJoinAssociationTest#(?:test_eager_load_with_string_joins))$/" --seed 50855
Using mysql2
Run options: -n "/^(?:ReadOnlyTest#(?:test_has_many_with_through_is_not_implicitly_marked_readonly)|DirtyTest#(?:test_field_named_field)|InnerJoinAssociationTest#(?:test_eager_load_with_string_joins))$/" --seed 50855
..F
Failure:
InnerJoinAssociationTest#test_eager_load_with_string_joins [/Users/yahonda/src/github.com/rails/rails/activerecord/test/cases/associations/inner_join_association_test.rb:87]:
Expected: 3
Actual: 4
bin/test test/cases/associations/inner_join_association_test.rb:82
Finished in 0.114674s, 26.1611 runs/s, 26.1611 assertions/s.
3 runs, 3 assertions, 1 failures, 0 errors, 0 skips
```
References:
- "13.3.3 Statements That Cause an Implicit Commit"
https://dev.mysql.com/doc/refman/8.0/en/implicit-commit.html
datetime with precision was passing assert_no_microsecond_precision
unintentionally, because `/\d\z/` matches both datetimes with precision
and datetimes without precision. Fixed by changing the time and the
regex to make with and without precision easier to distinguish.
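The vacuous match can be seen directly in plain Ruby; the timestamps are illustrative:

```ruby
with_precision    = "2000-01-01 12:34:56.789012"
without_precision = "2000-01-01 12:34:56"

# Both strings end in a digit, so /\d\z/ accepts both and the
# "no microsecond precision" assertion never distinguished them:
with_precision.match?(/\d\z/)    # => true
without_precision.match?(/\d\z/) # => true

# Anchoring on a fractional-seconds suffix does distinguish them:
with_precision.match?(/\.\d+\z/)    # => true
without_precision.match?(/\.\d+\z/) # => false
```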
Fix stub_version to consider schema_cache. ref: #35795
Remove failing test for unsupported version of MariaDB. ref: fb6743a
Before the IN clause optimization 70ddb8a, Active Record generated SQL
with binds when `prepared_statements: true`:
```ruby
# prepared_statements: true
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (?, ?, ?)
#
# prepared_statements: false
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
Author.where(id: [1, 2, 3]).to_a
```
But now, binds in the IN clause are substituted regardless of whether
`prepared_statements: true` or not:
```ruby
# prepared_statements: true
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
# prepared_statements: false
#
# SELECT `authors`.* FROM `authors` WHERE `authors`.`id` IN (1, 2, 3)
#
Author.where(id: [1, 2, 3]).to_a
```
I suppose that is considered a regression in this context:
> While I would prefer that we fix/avoid the too-many-parameters
problem, but I don't like the idea of globally ditching bind params for
this edge case... we're getting to the point where I'd almost consider
anything that doesn't use a bind to be a bug.
https://github.com/rails/rails/pull/33844#issuecomment-421000003
This makes binds consider whether `prepared_statements: true` or not
(i.e. restores the original behavior), while still keeping the
optimization when substituted binds are needed
(`prepared_statements: false`, `relation.to_sql`). Even when
`prepared_statements: true`, it is still much faster than before thanks
to the optimized (bind-node-less) binds generation.
```ruby
class Post < ActiveRecord::Base
end

ids = (1..1000).map do
  Post.create!.id
end

puts "prepared_statements: #{Post.connection.prepared_statements.inspect}"

Benchmark.ips do |x|
  x.report("where with ids") do
    Post.where(id: ids).to_a
  end
end
```
* Before (200058b0113efc7158432484d71c1a4f1484a4a1)
`prepared_statements: true`:
```
Warming up --------------------------------------
where with ids 6.000 i/100ms
Calculating -------------------------------------
where with ids 63.806 (± 7.8%) i/s - 318.000 in 5.015903s
```
`prepared_statements: false`:
```
Warming up --------------------------------------
where with ids 7.000 i/100ms
Calculating -------------------------------------
where with ids 73.550 (± 8.2%) i/s - 371.000 in 5.085672s
```
* Now with this change
`prepared_statements: true`:
```
Warming up --------------------------------------
where with ids 9.000 i/100ms
Calculating -------------------------------------
where with ids 91.992 (± 7.6%) i/s - 459.000 in 5.020817s
```
`prepared_statements: false`:
```
Warming up --------------------------------------
where with ids 10.000 i/100ms
Calculating -------------------------------------
where with ids 104.335 (± 8.6%) i/s - 520.000 in 5.026425s
```
Follow-up to #34122.
Relation method calls rely on method_missing, but if `Kernel` has a
method with the same name (e.g. `open`), Kernel's method is invoked,
since method_missing never happens.
To prevent that, eagerly generate relation methods for any method that
has the same name as a method on `Kernel`.
Fixes #39195.
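The shadowing can be reproduced in plain Ruby: inside an `instance_exec`'d block, a bare call to a `Kernel` method resolves before `method_missing` ever runs. The class and method names below are illustrative, not Relation's real code:

```ruby
class FakeRelation
  # Delegation via method_missing, as Relation does for query methods.
  def method_missing(name, *args)
    "delegated #{name}"
  end

  def respond_to_missing?(*)
    true
  end

  # Like Relation#scoping, evaluates the block with self swapped in.
  def scoping(&block)
    instance_exec(&block)
  end
end

relation = FakeRelation.new
relation.scoping { format("%d", 1) } # => "1": Kernel#format wins, not method_missing
relation.scoping { some_scope(1) }   # => "delegated some_scope"
```

Defining the method eagerly (so lookup finds it before `Kernel`'s) is exactly the fix the commit describes.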
We fixed `generate_relation_method` to address kwargs warnings in
#38038, but I missed that generated named scopes also need the same
fix.
The test case is picked from #39196.
Co-authored-by: John Hawthorn <john@hawthorn.email>
Actually that result is odd and hard to predict to me, but we should
not change public behavior without a deprecation cycle.
I had not intended to break any apps, so I've restored the behavior.
Fixes #39171.
Follow-up to #39147 and #39168.
By adding a new purpose-specific format, we avoid potential pitfalls
from concatenating format strings. We also save a String allocation per
Time attribute per inspect.
The new format also includes a time zone offset for more introspective
inspection.
In the AR test suite, require_dependency does not make much sense. Just
call vanilla require/load.
Note that in the test that made explicit use of it, there are no
autoload paths, and no constants have been autoloaded. In reality, the
code ended up calling Kernel#load.
SQLite3 does not recognize paths as file URIs unless the
`SQLite3::Constants::Open::URI` flag is set. Therefore, without this
flag, a path like "file::memory:" is interpreted as a filename, causing
a "file::memory:" file to be created and used as the database. Most
tests in `SQLite3TransactionTest` picked up this flag from
`shared_cache_flags`, but a few did not. Because those tests were
creating a file, the path was changed in #38620 such that it no longer
pointed to an in-memory database.
This commit restores the database path as "file::memory:" and ensures
the URI flag is set whenever `in_memory_db?` is true.
This reverts commit 9817d74f3be72d8e685301bfd0acb6a12b9cdda9, reversing
changes made to d326b029e0d3cd649d80a484ceb5138475d3601d.
Just making this easier to merge our PR in. Otherwise there are tons of
conflicts and merging our PR is faster this way.
Some commits added `**` to address kwargs warnings in Ruby 2.7, but
`save` and `save!` don't take positional arguments in the first place,
so maintaining both `*` and `**` is redundant.
6d68bb5f6909d7ce797551a7422c9f
```ruby
steve = Person.find_by(name: "Steve")
david = Author.find_by(name: "David")
relation = Essay.where(writer: steve)
# Before
relation.rewhere(writer: david).to_a # => []
# After
relation.rewhere(writer: david).to_a # => [david]
```
Currently, `rewhere` only works for real column names; it doesn't work
for aliased attributes, nested conditions, or associations.
To fix that, we need to build the new where clause first, and then get
the attribute names from the new where clause.
Make changing a NOT NULL constraint `reversible`.
When changing a NOT NULL constraint, we use the `ActiveRecord::ConnectionAdapters::SchemaStatements#change` method, which is not reversible, so `up` and `down` methods were required. Actually, we can use the `change_column_null` method if only one constraint is changed, but if we want to change multiple constraints with one ALTER query, `up` and `down` methods were still required.
Example failure: https://buildkite.com/rails/rails/builds/68661#84f8790a-fc9e-42ef-a7fb-5bd15a489de8/1002-1012
The failing `destroyed_by_association` tests create an author (a
DestroyByParentAuthor) and a book (a DestroyByParentBook) that belongs
to that author. If the database already contains books that refer to
that author's ID from previous tests (i.e. tests that disabled
`use_transactional_tests`), then one of those books will be loaded and
destroyed instead of the intended DestroyByParentBook book.
By loading the `:books` fixtures, we ensure the database does not
contain such unexpected books.
Co-authored-by: Eugene Kenny <elkenny@gmail.com>
Co-authored-by: Ryuta Kamizono <kamipo@gmail.com>
Since 901d62c586c20ab38b0f18f4bd9a4419902768c4, associations can only be
autosaved once: after a record has been saved, `@new_record_before_save`
will always be false. This assumes that records only transition to being
persisted once, but there are two cases where it happens multiple times:
when the transaction that saved the record is rolled back, and when the
persisted record is later duplicated.
I supposed all aggregation functions would return numeric results in
#39039, but that assumption was incorrect for `minimum` and `maximum`
when the aggregated column is a non-numeric type.
I've restored type casting of the aggregated result for `minimum` and
`maximum`.
Fixes #39110.
The type information for type casting is entirely separated into the
type object, so if anyone is passing a column to `type_cast` in Rails
6, they are likely doing something wrong. See the comment for more
details:
28d815b894/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb (L33-L42)
This also deprecates passing legacy binds (an array of `[column, value]`
pairs, the 4.2-style format) to query methods on the connection. That
legacy format was kept for backward compatibility; instead, I've
supported the casted binds format (an array of casted values), which is
easier to construct than the two existing binds formats.
I found the internal `without_transaction_enrollment` callbacks, which
have seen no new use in over five years, when I tried to work on
reverting #9068 (https://github.com/rails/rails/pull/36049#issuecomment-487318060).
I think we will never make those callbacks public, since the mechanism
of `without_transaction_enrollment` is too implementation specific; at
least before #9068, records in a transaction were all enrolled into the
transaction.
Those callbacks were introduced in #18936 to implement `touch_later`
(#19324), but I think internal callbacks are overkill just for
`touch_later`, and invoking the extra callbacks also has a little
overhead even if we don't use them.
So I think we can remove the internal callbacks for now, until we come
up with a good use for them.
`column_types` is empty except in the PostgreSQL adapter, and after
`attribute_types.each_key { |k| column_types.delete k }` it is empty in
almost all cases even for the PostgreSQL adapter, so that code is quite
useless. This improves performance for `find_by_sql` by avoiding that
useless loop as much as possible.
```ruby
ActiveRecord::Schema.define do
  create_table :active_storage_blobs do |t|
    t.string :key, null: false
    t.string :filename, null: false
    t.string :content_type
    t.text :metadata
    t.string :service_name, null: false
    t.bigint :byte_size, null: false
    t.string :checksum, null: false
    t.datetime :created_at, null: false
    t.index [ :key ], unique: true
  end
end

class ActiveStorageBlob < ActiveRecord::Base
end

Benchmark.ips do |x|
  x.report("find_by") { ActiveStorageBlob.find_by(id: 1) }
end
```
Before:
```
Warming up --------------------------------------
find_by 1.256k i/100ms
Calculating -------------------------------------
find_by 12.595k (± 3.4%) i/s - 64.056k in 5.091599s
```
After:
```
Warming up --------------------------------------
find_by 1.341k i/100ms
Calculating -------------------------------------
find_by 13.170k (± 3.5%) i/s - 67.050k in 5.097439s
```
To avoid the column types loop for the PostgreSQL adapter, this skips
returning additional column types if a column has already been type cast
by the pg decoders. Fortunately this partly fixes #36186 for common types.
Relying on `Arel::Table.engine` is convenient if an app has only a
single kind of database, but if not, the global state is not always the
same as the current connection.
`allowed_index_name_length` was used for internal temporary operations
in SQLite3, since an index name in SQLite3 must be globally unique and
SQLite3 has no ALTER TABLE feature (it is emulated by creating a
temporary table with a prefix).
`allowed_index_name_length` reserved a margin for that prefix, but
SQLite3 actually has no limitation on identifier name length, so the
margin was removed in 36901e6.
Now `allowed_index_name_length` is no longer relied on by any adapter,
so I'd like to remove this internal-specific method which is no longer
used.
This is a smaller alternative performance improvement that avoids
refactoring the type casting mechanism (#39009).
It is a relatively small change (but about 40% faster than before), so
it should be easier to review without a discussion about refactoring the
type casting mechanism.
It simply makes `attribute.in(values)` allocate less, by building one
casted array node instead of an array of casted nodes.
```ruby
ids = (1..1000).map do |n|
  Post.create!.id
end

Benchmark.ips do |x|
  x.report("where with ids") do
    Post.where(id: ids).to_a
  end

  x.report("where with sanitize") do
    Post.where(ActiveRecord::Base.sanitize_sql(["id IN (?)", ids])).to_a
  end

  x.compare!
end
```
Before:
```
Warming up --------------------------------------
where with ids 7.000 i/100ms
where with sanitize 13.000 i/100ms
Calculating -------------------------------------
where with ids 70.661 (± 5.7%) i/s - 357.000 in 5.072771s
where with sanitize 130.993 (± 7.6%) i/s - 663.000 in 5.096085s
Comparison:
where with sanitize: 131.0 i/s
where with ids: 70.7 i/s - 1.85x slower
```
After:
```
Warming up --------------------------------------
where with ids 10.000 i/100ms
where with sanitize 13.000 i/100ms
Calculating -------------------------------------
where with ids 98.174 (± 7.1%) i/s - 490.000 in 5.012851s
where with sanitize 132.289 (± 8.3%) i/s - 663.000 in 5.052728s
Comparison:
where with sanitize: 132.3 i/s
where with ids: 98.2 i/s - 1.35x slower
```
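The shape of the change can be pictured with a plain-Ruby sketch. The `Casted` struct below is a hypothetical stand-in for an Arel casted-value node, not the real class:

```ruby
# Hypothetical stand-in for an Arel casted-value node; not Rails' actual class.
Casted = Struct.new(:value)

ids = (1..5).to_a

# Before: one node allocated per value.
nodes_before = ids.map { |id| Casted.new(id) }

# After: a single node wrapping the whole array, cast once.
node_after = Casted.new(ids)

puts nodes_before.size        # N allocations
puts node_after.value.inspect # one node holding the array
```

Fewer node allocations means less work both when building the AST and when visiting it.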
`in_clause_length` was added in c5a284f to address Oracle's IN clause
length limitation.
Since #35838 and #36074, `in_clause_length` is entirely handled in the
Arel visitor.
The Oracle visitors were the only code relying on `in_clause_length`,
so I'd like to remove it from the Rails code base, just as the Oracle
visitors were removed (#38946).
Before this, 1000 `Or` nodes would raise "stack level too deep" due to
visiting a too-deep Arel AST.
This makes the Arel `Or` AST more concise and the `Or` visitor
non-recursive when `Or` nodes are adjoined; as a result, "stack level
too deep" is no longer raised.
```ruby
class Post < ActiveRecord::Base
end

posts = (0..500).map { |i| Post.where(id: i) }

Benchmark.ips do |x|
  x.report("inject scopes") { posts.inject(&:or).to_sql }
end
```
Before:
```
Warming up --------------------------------------
inject scopes 9.000 i/100ms
Calculating -------------------------------------
inject scopes 96.126 (± 2.1%) i/s - 486.000 in 5.058960s
```
After:
```
Warming up --------------------------------------
inject scopes 10.000 i/100ms
Calculating -------------------------------------
inject scopes 101.714 (± 2.9%) i/s - 510.000 in 5.018880s
```
Fixes #39032.
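The idea can be sketched in plain Ruby with a hypothetical `FlatOr` node (the actual Arel classes differ): adjoined OR terms are kept in one flat children array, so emitting SQL is an iteration rather than a per-node recursion.

```ruby
# Hypothetical flattened OR node, not the real Arel implementation.
FlatOr = Struct.new(:children) do
  def or(other)
    # Appending keeps the tree depth constant instead of nesting Or(Or(...), ...).
    FlatOr.new(children + [other])
  end

  def to_sql
    # Iterative "visit": no recursion, so no "stack level too deep".
    children.join(" OR ")
  end
end

terms = (1..1000).map { |i| "id = #{i}" }
node = terms.inject(FlatOr.new([])) { |n, t| n.or(t) }

puts node.children.size # all 1000 terms sit at depth one
puts node.to_sql.start_with?("id = 1 OR id = 2")
```

With the old nested shape, each of the 1000 terms added one level of recursion to the visitor; with the flat shape the visitor loops over the children.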
If only one Arel node exists, wrapping it in an `And` node is obviously
redundant; making the Arel AST concise improves performance for visiting
the AST (about 20% faster for a complex AST case).
```ruby
class Post < ActiveRecord::Base
end

posts = (0..500).map { |i| Post.where(id: i) }

Benchmark.ips do |x|
  x.report("inject scopes") { posts.inject { |res, scope| res.or(scope) }.to_sql }
end
```
Before:
```
Warming up --------------------------------------
inject scopes 8.000 i/100ms
Calculating -------------------------------------
inject scopes 80.416 (± 2.5%) i/s - 408.000 in 5.078118s
```
After:
```
Warming up --------------------------------------
inject scopes 9.000 i/100ms
Calculating -------------------------------------
inject scopes 96.126 (± 2.1%) i/s - 486.000 in 5.058960s
```
Currently, `count` and `average` always return a numeric value, but
`sum`, `maximum`, and `minimum` do not always return a numeric value
when aggregating on a custom attribute type.
I think that inconsistent behavior is surprising:
```ruby
# All adapters except postgresql adapter are affected
# by custom type casting.
Book.group(:status).sum(:status)
# => { "proposed" => "proposed", "published" => nil }
```
That is caused by the fallback that looks up the cast type via
`type_for(column)`. Now all supported adapters can return a numeric
value without that fallback, so I think we can remove it; doing so will
also fix the aggregate functions to return numeric values consistently.
While not a particularly good idea, it's possible to use `object_id` as
an attribute name, typically by defining a polymorphic association named
`object`. Since 718a32ca745672a977a0d4ae401f61f439767405, transactional
callbacks deduplicate records by their `object_id`, but this causes
incorrect behaviour when the record has an attribute with that name.
Using `__id__` instead makes a naming collision much less likely.
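The collision can be reproduced in plain Ruby. The `Record` class below is a contrived stand-in, not Active Record code: once an attribute reader shadows `Object#object_id`, deduplicating by `object_id` merges distinct records, while `__id__` still works.

```ruby
class Record
  # Shadows Object#object_id, like a polymorphic `object` association's
  # foreign-key reader would (Ruby warns about redefining it).
  attr_reader :object_id

  def initialize(object_id)
    @object_id = object_id
  end
end

a = Record.new(1)
b = Record.new(1) # a different record whose `object_id` attribute happens to match

puts [a, b].uniq(&:object_id).size # 1: wrongly deduplicated
puts [a, b].uniq(&:__id__).size    # 2: distinct objects kept
```

`__id__` is defined on `BasicObject` and is far less likely to be shadowed by an attribute method.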
This PR allows for passing `if_exists` options to the `remove_index`
method so that we can ignore already removed indexes. This work follows
column `if/if_not_exists` from #38352 and `:if_not_exists` on `add_index`
from #38555.
We've found this useful at GitHub, there are migrations where we don't
want to raise if an index was already removed. This will allow us to
remove a monkey patch on `remove_index`.
I considered raising after the `index_name_for_remove` method is called
but that method will raise if the index doesn't exist before we get to
execute. I have a commit that refactors this but after much
consideration this change is cleaner and more straightforward than other
ways of implementing this.
This change also adds a little extra validation to the `add_index` test.
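A minimal sketch of the guard, assuming a simplified `remove_index` that operates on a plain array of index names rather than the real schema:

```ruby
# Simplified model of the `if_exists` guard; not the actual Rails method.
def remove_index(indexes, name, if_exists: false)
  unless indexes.include?(name)
    return indexes if if_exists # silently ignore an already-removed index
    raise ArgumentError, "Index #{name} does not exist"
  end
  indexes - [name]
end

indexes = ["index_users_on_email"]
puts remove_index(indexes, "index_users_on_email").inspect             # removed
puts remove_index([], "index_users_on_email", if_exists: true).inspect # no-op, no raise
```

Without `if_exists: true`, removing a missing index still raises, preserving the existing strict behavior.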
Fix `nodoc` on edited methods.
This removes ibm_db, informix, mssql, oracle, and oracle12 Arel visitors
which are not used in the code base.
Actually the oracle and oracle12 visitors are used by the oracle-enhanced
adapter, but I now think those visitors should live in the adapter's
repo, like the sqlserver adapter and its dedicated Arel visitor
(https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/blob/master/lib/arel/visitors/sqlserver.rb);
otherwise it is hard to find bugs and review PRs for the oracle
visitors (e.g. #35838, #37646), since we don't have enough knowledge of,
or an environment for, Oracle.
Previously, if `build_association` was called multiple times for a `has_one` association but never committed to the database, the first newly-associated record would trigger `touch` during the attempted removal of the record.
For example:
```ruby
class Post < ActiveRecord::Base
  has_one :comment, inverse_of: :post, dependent: :destroy
end

class Comment < ActiveRecord::Base
  belongs_to :post, inverse_of: :comment, touch: true
end

post = Post.create!
comment_1 = post.build_comment
comment_2 = post.build_comment
```
When `comment_2` is initialized, the `has_one` would attempt to destroy `comment_1`, triggering a `touch` on `post` from an association record that hasn't been committed to the database.
This removes the attempt to delete an associated `has_one` unless it’s persisted.
Related #15137.
Firebird related code is already removed in #15137.
We have two `current_adapter?(:DB2Adapter)` checks in tests, but the
adapter is no longer maintained (the last release was on November 15, 2012).
https://rubygems.org/gems/db2
Yet another (the latest) DB2 adapter (`IBM_DBAdapter`) might support
Rails 5.0.7, but apparently does not work with Rails 5.2.
https://rubygems.org/gems/ibm_db
We have a few lines mentioning DB2 in the docs, but they are now of no
value to almost all current users.
#37523 has a regression that ignores extra scoping in callbacks when
creating records on an association relation.
It should respect `klass.current_scope` even when creating records on an
association relation, to allow extra scoping in callbacks.
Fixes #38741.
This module was added in 16ae3db5a5c6a08383b974ae6c96faac5b4a3c81 to
allow `ActiveRecord::AttributeMethods::Dirty` to define callbacks and
still have its `_update_record` method wrapped by the version defined in
`ActiveRecord::Callbacks`, so that updates in `before_update` callbacks
are taken into account for partial writes.
The callbacks that created this circular dependency were removed in
34f075fe5666dcf924606f8af2537b83b7b5139f, so we can move the callback
definitions back to the `Callbacks` module.
* Fix EagerLoadPolyAssocsTest setup
* EagerLoadPolyAssocsTest includes a Remembered module in multiple test
ActiveRecord classes. The module is supposed to keep track of records
created for each of the included classes individually, but it collects all
records for every class. This happens because @@remembered is defined on the
Remembered module and shared between the ActiveRecord classes. This only
becomes an issue for databases (like CockroachDB) that use random primary
keys instead of sequential ones by default.
* To fix the bug, we can make the remembered collection name unique per
ActiveRecord class.
* Update EagerLoadPolyAssocsTest test setup
* Instead of defining remembered as a class variable, we can define it as an
instance variable that will be unique to every class that includes the
Remembered module.
When we try to create a table which already exists and which also adds
indexes, the `if_not_exists` option passed to `create_table` is not
extended to the indexes, so the migration results in an error if the
table and indexes already exist.
This change extends `if_not_exists` support to `add_index` so that no
error is raised if the migration tries to create a table which also has
existing indexes.
As a side effect, individual `add_index` calls now also accept the
`if_not_exists` option and respect it henceforth.
[Prathamesh Sonpatki, Justin George]
This reverts commit f265e0ddb1139a91635b7905aae1be76b22c6db1, reversing
changes made to 08dfa9212df4a6bf332a4c49b7e8a7d876a69331.
Reverted due to surprising behavior for applications. We need to
deprecate this behavior first instead of raising by default.
Fix `insert_all` enum test:
fix RuboCop offenses and change the test to double quotes;
fix the `insert_all_enum_values` test to use double quotes and order the relation before `pluck`;
change the `insert_all_enum_values` test to not skip duplicates so it works across adapters.
Co-Authored-By: Ryuta Kamizono <kamipo@gmail.com>
If a transaction is wrapped in a `Timeout.timeout(duration)` block, the
transaction will be committed when the transaction block is exited by the
timeout, since it uses `throw`. Ruby code has no way to distinguish
a block being exited via `return`, `break`, or `throw`, so fixing this
problem for the `throw` case would require a backwards-incompatible
change for blocks exited with `return` or `break`. As such, the current
behaviour is left as is for now, so it can be changed in the future.
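A plain-Ruby sketch of why the `throw` case is indistinguishable. The `fake_transaction` below is a toy, not Active Record's implementation: the line after `yield` is skipped on `throw`, `break`, and `return` alike, and nothing reaches a `rescue`, so an implementation that commits unless it rescued an exception will commit.

```ruby
def fake_transaction(log)
  yield
  log << :after_yield # skipped when the block exits via throw/break/return
rescue StandardError
  log << :rolled_back # not reached for throw: no exception was raised
ensure
  # At this point we only know nothing was rescued; a normal exit and a
  # throw/break/return look identical, so the "commit" path runs either way.
  log << :committed unless log.include?(:rolled_back)
end

log = []
catch(:timeout) do
  fake_transaction(log) { throw :timeout } # simulates Timeout firing inside the block
end
puts log.inspect
```

The block exits via `throw`, yet the transaction still "commits", mirroring the behaviour described above.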
This behaviour is in `activerecord/lib/active_record/enum.rb`,
`#serialize(value)`, line 143: if the value is not present in the
mapping, we send the value back, which in MySQL returns an unrelated
record.
I have changed it to return `nil` if the value is not present in the mapping.
Implemented code review changes.
Improved test case coverage.
[ci skip] - cosmetic changes for better readability of the change log.
Signed-off-by: ak <atulkanswal@gmail.com>
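The fix can be illustrated with a plain hash standing in for the enum mapping (the lambdas are hypothetical, not the actual `EnumType#serialize`):

```ruby
mapping = { "proposed" => 0, "published" => 1 }

# Before: an unknown value fell through unchanged, and MySQL's implicit
# casting of that string against an integer column could match unrelated rows.
serialize_before = ->(value) { mapping.fetch(value, value) }

# After: an unknown value serializes to nil, which matches no rows.
serialize_after = ->(value) { mapping[value] }

puts serialize_before.call("bogus").inspect # "bogus"
puts serialize_after.call("bogus").inspect  # nil
puts serialize_after.call("proposed")       # 0
```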
When `colorize_logging` is disabled, the logs do not colorize the SQL
queries, but the `sql_color` method is still always invoked, and its
regex matching slows down the queries.
This PR fixes #38685 and removes the unnecessary invocation of the
`sql_color` method when `colorize_logging` is disabled.
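A sketch of the guard, with illustrative method names simplified from the log subscriber:

```ruby
# Regex-based colorization; cheap to skip entirely when coloring is off.
def sql_color(sql)
  case sql
  when /\A\s*select/i then "\e[34m#{sql}\e[0m" # blue for SELECT
  else sql
  end
end

def format_sql(sql, colorize_logging:)
  return sql unless colorize_logging # bail out before any regex matching runs
  sql_color(sql)
end

puts format_sql("SELECT 1", colorize_logging: false)           # plain string
puts format_sql("SELECT 1", colorize_logging: true).inspect    # ANSI-colored
```

The early return means disabled coloring costs nothing per query instead of a regex scan of every SQL statement.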
This method was jumping through extra hoops to find the name of the
class the connection is stored on when we can get it from the connection
itself. Since we already have the connection we don't need to loop through the
pools.
In addition, this was using the wrong class name. The class name for the
schema migration should come from the connection owner class, not from
the `db_config.name`. In this case, `db_config.name` is the name of the
configuration in the database.yml. Rails uses the class name to lookup
connections, not the db config name, so we should be consistent here.
While working on this I noticed that we were generating an extra schema
migration class for `ActiveRecord::Base`. Since `ActiveRecord::Base` can
and should use the default and we don't want to create a new one for
single db applications, we should skip creating this if the spec name is
`ActiveRecord::Base`. I added an additional test that ensures the class
generation is correct.
We don't actually need this since the only reason it exists is to pass
the owning class name down to the `handler`. This removes a level of
indirection and an unnecessary accessor on db_config. db_config
shouldn't have to know what class owns it, so we can just remove this
and pass it to the handler.
The Symbol case is needed to preserve current behavior. This doesn't
need a changelog because it's changing un-released behavior.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Instead of doing a case statement here we can have each of the objects
respond to `invert`. This means that when adding new objects we don't
need to increase this case statement, it's more object oriented, and
let's be fair, it looks better too.
Aaron and I stumbled upon this while working on some performance
work in Arel.
I removed `random_object` from the invert test because we don't support
random objects. If you pass a random object to Arel, it should raise,
not be inverted.
Co-authored-by: Aaron Patterson <aaron.patterson@gmail.com>
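The object-oriented shape of the change can be sketched with toy node classes (hypothetical, not the actual Arel nodes):

```ruby
# Each node knows its own inverse, so no central case statement is needed.
Equality = Struct.new(:left, :right) do
  def invert
    NotEqual.new(left, right)
  end
end

NotEqual = Struct.new(:left, :right) do
  def invert
    Equality.new(left, right)
  end
end

node = Equality.new(:id, 1)
puts node.invert.class          # NotEqual
puts node.invert.invert == node # inverting twice round-trips

# A random object without #invert raises NoMethodError, as it should.
```

Adding a new node type now only requires defining `invert` on it, rather than growing a case statement elsewhere.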
After running `bundle exec rake test:sqlite3` and `bundle exec rake test:sqlite3_mem`
on my VM I noticed that it had created untracked files:
```bash
vagrant@ubuntu-bionic:/rails/activerecord$ git status
Untracked files:
(use "git add <file>..." to include in what will be committed)
db/
file::memory:
```
To prevent them from being accidentally committed, I moved 'file::memory:'
into the `activerecord/db/` folder and added that folder to .gitignore.
Alternatively, we could fix this by removing the `db/` folder in each
test that creates it.
It would be great if someone could confirm that this happens on other
machines, not only on my VM.
The current code expects an `eq` node to have at least one Arel
attribute, but an `eq` node may have none (e.g. `Arel.sql("...").eq(...)`).
In that case `unscope` will raise a `NoMethodError`:
```
% bundle exec ruby -w -Itest test/cases/relations_test.rb -n test_unscope_with_arel_sql
Using sqlite3
Run options: -n test_unscope_with_arel_sql --seed 4477
# Running:
E
Error:
RelationTest#test_unscope_with_arel_sql:
NoMethodError: undefined method `name' for #<Arel::Nodes::Quoted:0x00007f9938e55960>
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:157:in `block (2 levels) in except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/arel.rb:57:in `fetch_attribute'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:157:in `block in except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:156:in `reject'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:156:in `except_predicates'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/where_clause.rb:31:in `except'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:487:in `block (2 levels) in unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:481:in `each'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:481:in `block in unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:471:in `each'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:471:in `unscope!'
/Users/kamipo/src/github.com/rails/rails/activerecord/lib/active_record/relation/query_methods.rb:464:in `unscope'
test/cases/relations_test.rb:2062:in `test_unscope_with_arel_sql'
```
We should check both `value.left` and `value.right` for whether they are
Arel attributes.
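A simplified sketch of that check with toy structs (not the actual `where_clause.rb` code): look for an attribute on either side instead of assuming `value.left` is one.

```ruby
Attribute = Struct.new(:name)  # stands in for Arel::Attributes::Attribute
Quoted    = Struct.new(:value) # stands in for Arel::Nodes::Quoted
Equality  = Struct.new(:left, :right)

def fetch_attribute(node)
  # Check both sides; either may be a non-attribute node such as Quoted.
  [node.left, node.right].find { |side| side.is_a?(Attribute) }
end

with_attr    = Equality.new(Attribute.new("id"), Quoted.new(1))
without_attr = Equality.new(Quoted.new("1=1"), Quoted.new(1)) # e.g. Arel.sql(...).eq(...)

puts fetch_attribute(with_attr)&.name      # "id"
puts fetch_attribute(without_attr).inspect # nil, instead of NoMethodError on #name
```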
Behavior has not changed here but the previous API could be
misleading to people who thought it would switch connections for only
that class. `connected_to` switches the context from which we are
getting connections, not the connections themselves.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
This adds gzip support for both the YAML and the Marshal serialization
strategies.
Particularly large schema caches can become a problem when deploying to
Kubernetes, as there is currently a 1*1024*1024 byte limit for the
ConfigMap. For large databases, the schema cache can exceed this limit.
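A minimal sketch of the idea using Ruby's stdlib (the payload below is made-up data, and the actual serializer plumbing in Rails differs):

```ruby
require "yaml"
require "zlib"

# A schema-cache-like payload: table name => column names (made-up data).
payload = YAML.dump("posts" => %w[id title body created_at])

gzipped = Zlib.gzip(payload)

# Round-trips losslessly; large real-world caches compress to well under
# the ConfigMap size limit.
puts Zlib.gunzip(gzipped) == payload
```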
Applications can now connect to multiple shards and switch between
their shards in an application. Note that the shard swapping is
still a manual process as this change does not include an API for
automatic shard swapping.
Usage:
Given the following configuration:
```yaml
production:
  primary:
    database: my_database
  primary_shard_one:
    database: my_database_shard_one
```
Connect to multiple shards:
```ruby
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to shards: {
    default: { writing: :primary },
    shard_one: { writing: :primary_shard_one }
  }
end
```
Swap between shards in your controller / model code:
```ruby
ActiveRecord::Base.connected_to(shard: :shard_one) do
  # Read from shard one
end
```
The horizontal sharding API also supports read replicas. See
guides for more details.
This PR also moves some no-doc'd methods into the private namespace as
they were unnecessarily public. We've updated some error messages and
documentation.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
I have so. many. regrets. about using `spec_name` for database
configurations and now I'm finally putting this mistake to an end.
Back when I started multi-db work I assumed that eventually
`connection_specification_name` (sometimes called `spec_name`) and
`spec_name` for configurations would one day be the same thing. After
2 years I no longer believe they will ever be the same thing.
This PR deprecates `spec_name` on database configurations in favor of
`name`. It's the same behavior, just a better name, or at least a
less confusing name.
`connection_specification_name` refers to the parent class name (i.e.
ActiveRecord::Base, AnimalsBase, etc.) that holds the connection for its
models. In some places like the ConnectionHandler this is shortened to
`spec_name`, hence the major confusion.
Recently I've been working with some new folks on database stuff and
connection management, and I realized how confusing it is to explain
that `db_config.spec_name` is not the same thing as the `spec_name`
shorthand for `connection_specification_name`. Worse, one is a symbol
while the other is a class name. This was made even more complicated by
the fact that `ActiveRecord::Base` used `primary` as the
`connection_specification_name` until #38190.
After spending 2 years with connection management I don't believe we can
ever use the symbols from the database configs as a way to connect to
the database without the class name being _somewhere_, because a
db_config does not know what its owner class is until it's been
connected, and a model has no idea what db_config belongs to it until
it's connected. The model is the only way to tie a primary/writer config
to a replica/reader config. This could change in the future, but I don't
see value in adding a class name to the db_configs before connection, or
in telling a model what config belongs to it before connection. That
would probably break a lot of application assumptions. If we do ever end
up in that world, we can use `name`, because tbh `spec_name` and
`connection_specification_name` were always confusing to me.
Followup to 88fe76e69328d38942130e16fb65f4aa1b5d1a6b.
These are new in RuboCop 0.80.0, and enforce a style we already prefer
for performance reasons (see df81f2e5f5df46c9c1db27530bbd301b6e23c4a7).
Add `#strict_loading` to any record to prevent lazy loading of associations.
`strict_loading` will cascade down from the parent record to all the
associations to help you catch any places where you may want to use
`preload` instead of lazy loading. This is useful for preventing N+1 queries.
Co-authored-by: Aaron Patterson <aaron.patterson@gmail.com>
Before df186bd16f0d4a798e626297277fc6b490c1419e, `assign_attributes` and
`attributes=` were both defined in Active Model and both made a copy of
their argument. Now `assign_attributes` is overridden in Active Record
and the copy happens there instead, but `attributes=` isn't overridden.
This meant that assigning nested or multi-parameters via `attributes=`
would mutate the argument, which the copying was meant to prevent.
This issue is caused by association queries using a newly created,
one-time predicate builder, which isn't aware of registered predicate
handlers. To fix the issue, dup the predicate builder for the klass
instead of creating a new one.
Fixes #38239.
Rails has a number of places where a YAML configuration file is read,
then ERB is evaluated and finally the YAML is parsed.
This consolidates that into one common class.
Co-authored-by: Kasper Timm Hansen <kaspth@gmail.com>
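The shared behaviour amounts to this (the helper name below is illustrative, not the consolidated class's real API): evaluate the ERB first, then parse the result as YAML.

```ruby
require "erb"
require "yaml"

# Illustrative helper: render ERB, then parse the resulting YAML.
def parse_yaml_config(source)
  YAML.load(ERB.new(source).result)
end

config = parse_yaml_config(<<~YAML)
  adapter: sqlite3
  pool: <%= 2 + 3 %>
YAML

puts config.inspect # ERB ran before YAML parsing, so `pool` is an Integer
```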
- When a user passes `updated_at` to `upsert_all`, the given value is used.
- When a user omits `updated_at`, `upsert_all` touches the timestamp if (but only if) any upserted values differ.
Preserve Rails' ability to generate intelligent cache keys for ActiveRecord when using `upsert_all` frequently to sync imported data.
Currently the pg_class catalog is filtered by its relkind value to
retrieve the indexes in a table. In PostgreSQL versions lower than 11
that value is always `i` (lower case), but since version 11 PostgreSQL
supports partitioned indexes, referenced with a relkind value of `I`
(upper case). This makes every feature in the current code base exclude
those partitioned indexes.
The proposed solution is to use an `IN` clause to match relkind values
of `i` and/or `I` when retrieving a table's indexes.
This PR adds support for `if_exists` on `remove_column` and
`if_not_exists` on `add_column`, silently ignoring the operation if the
remove targets a non-existent column or the add targets an already
existing column.
We (GitHub) have custom monkey-patched support for these features and
would like to upstream this behavior.
This matches the behavior already supported for `create_table` and
`drop_table`. The behavior for SQLite differs from MySQL/PostgreSQL for
`remove_column`, and that is reflected in the tests.