Numericality validations for aliased attributes were not able
to get the value of the attribute before type cast, because
Active Record tried to look the value up by the attribute alias
name rather than by the original attribute name.
Example of a validation that would pass even if an invalid
value were provided:
class MyModel < ActiveRecord::Base
validates :aliased_balance, numericality: { greater_than_or_equal_to: 0 }
end
If we instantiate MyModel as below, it will be valid: when the
numericality validation runs, it cannot get the value before
type cast, so it uses the type-cast value, which is `0.0`, and
the validation passes.
subject = MyModel.new(aliased_balance: "abcd")
subject.valid?
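To see why the cast hides the bad input, note that casting a non-numeric string to a float (roughly analogous to the numeric type cast here) yields `0.0`, which satisfies the `greater_than_or_equal_to: 0` check:

```ruby
# A non-numeric string type-casts to 0.0, which satisfies
# greater_than_or_equal_to: 0, so the record looks valid.
"abcd".to_f # => 0.0
```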
But if we declare MyModel like this:
class MyModel < ActiveRecord::Base
validates :balance, numericality: { greater_than_or_equal_to: 0 }
end
and assign the value "abcd" to `balance`, then when the
validations run the model will be invalid, because Active Record
can get the value before type cast.
With this change, `read_attribute_before_type_cast` is able to
get the value before type cast even when `attr_name` is an
attribute alias.
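A minimal plain-Ruby sketch of the idea (this is not the actual Rails internals; `ATTRIBUTE_ALIASES` and the raw-value hash are stand-ins): resolve the alias to the original attribute name before reading the raw, pre-type-cast value.

```ruby
class MyModelSketch
  # Stand-in for the alias mapping that `alias_attribute` maintains.
  ATTRIBUTE_ALIASES = { "aliased_balance" => "balance" }.freeze

  def initialize(raw_attributes)
    @raw_attributes = raw_attributes # values as assigned, before type cast
  end

  def read_attribute_before_type_cast(attr_name)
    name = attr_name.to_s
    # The fix: map an attribute alias back to the real column name first.
    name = ATTRIBUTE_ALIASES.fetch(name, name)
    @raw_attributes[name]
  end
end

model = MyModelSketch.new("balance" => "abcd")
model.read_attribute_before_type_cast(:aliased_balance) # => "abcd"
```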
Serialized attributes stored in BLOB columns are loaded with
the `ASCII-8BIT` (a.k.a. `BINARY`) encoding.
So unless the serialized payload is pure ASCII, both strings
need the same encoding to compare as equal.
Since the serializer has no way to know how the string will
be stored, it's up to the column type to properly set the
encoding.
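The comparison problem is reproducible in plain Ruby: two strings with identical bytes but different encodings are not equal unless both are pure ASCII.

```ruby
utf8   = "héllo"
binary = utf8.dup.force_encoding(Encoding::ASCII_8BIT)

utf8.bytes == binary.bytes # => true  (same payload)
utf8 == binary             # => false (encodings differ, bytes aren't ASCII)
```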
Random failures due to active connection checking within the
`assert_no_changes` block:
Failure:
MigrationTest#test_with_advisory_lock_closes_connection [/rails/activerecord/test/cases/migration_test.rb:947]:
--- expected
+++ actual
@@ -1 +1 @@
-["SELECT 1", "SELECT 1"]
+["SELECT 1", "SELECT 1", "SELECT 1"]
This commit scopes the query down further to ensure that the advisory
unlock query is not left in `pg_stat_activity`, which would indicate
that the connection pool was not disconnected.
The previous return value of nil was undocumented and inconsistent with
the non-batched versions of these methods.
Also lean on `each` to create the batches, and add API documentation for
`update_all`, `delete_all`, and `destroy_all` on `BatchEnumerator`.
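A sketch of the new contract (simplified; `FakeBatch` stands in for one batch relation whose `update_all`/`delete_all` return affected-row counts, as the real methods do): sum each batch's count instead of returning `nil`.

```ruby
# Hypothetical stand-in for one batch of records; in Rails this would
# be a relation whose update_all/delete_all return affected-row counts.
FakeBatch = Struct.new(:size) do
  def update_all(_updates)
    size
  end

  def delete_all
    size
  end
end

class BatchEnumeratorSketch
  def initialize(batches)
    @batches = batches
  end

  # Returns the total number of affected rows instead of nil.
  def update_all(updates)
    total = 0
    @batches.each { |batch| total += batch.update_all(updates) }
    total
  end

  def delete_all
    total = 0
    @batches.each { |batch| total += batch.delete_all }
    total
  end
end

enum = BatchEnumeratorSketch.new([FakeBatch.new(1000), FakeBatch.new(42)])
enum.update_all(active: false) # => 1042
```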
Perform an efficient existence query through `#exists?` instead of loading the entire relation into memory and searching it.
Before:
> Person.where(name: "David").include?(david)
SELECT `people`.* FROM `people` WHERE `people`.`name` = 'David'
=> true
After:
> Person.where(name: "David").include?(david)
SELECT 1 AS one FROM `people` WHERE `people`.`name` = 'David' AND `people`.`id` = 1 LIMIT 1
=> true
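The dispatch can be sketched in plain Ruby (simplified, not the real Rails source; `RelationSketch` stands in for a relation backed by the database): a loaded relation searches its in-memory records, while an unloaded one answers `include?` with a targeted existence check rather than materializing every row.

```ruby
class RelationSketch
  def initialize(all_ids)
    @all_ids = all_ids # stands in for the rows in the database
    @records = nil     # nil until the relation is loaded
  end

  def load
    @records = @all_ids.dup # stands in for SELECT * FROM ...
    self
  end

  def loaded?
    !@records.nil?
  end

  # Stands in for SELECT 1 AS one ... LIMIT 1
  def exists?(id)
    @all_ids.include?(id)
  end

  def include?(id)
    loaded? ? @records.include?(id) : exists?(id)
  end
end

RelationSketch.new([1, 2, 3]).include?(2) # => true, without a full load
```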
Add support for PostgreSQL `interval` data type with conversion to
`ActiveSupport::Duration` when loading records from database and
serialization to ISO 8601 formatted duration string on save.
Add support to define a column in migrations and get it in a schema dump.
Optional column precision is supported.
To use this in 6.1, add the following line to your model file:
attribute :duration, :interval
To keep the old behavior until 6.2 is released:
attribute :duration, :string
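For illustration, the serialization direction can be sketched as a hand-rolled ISO 8601 duration formatter (this is not ActiveSupport's implementation; `ActiveSupport::Duration` handles many more cases, such as days, months, and fractional seconds):

```ruby
# Format a duration given in whole seconds as an ISO 8601 string
# like "PT1H30M". Simplified sketch; hours are not rolled into days.
def iso8601_duration(total_seconds)
  hours, rem = total_seconds.divmod(3600)
  minutes, seconds = rem.divmod(60)
  out = "PT".dup
  out << "#{hours}H"   if hours > 0
  out << "#{minutes}M" if minutes > 0
  out << "#{seconds}S" if seconds > 0 || out == "PT"
  out
end

iso8601_duration(5400) # => "PT1H30M"
```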
The test is flaky and I can't reproduce it locally. Using the count does
not provide any useful information on failure, so we select the queries
instead.