Follow up to #40351 and #41418.
This fixes `average` on decimal and enum (and on integer attributes in
general) so that it no longer runs the result through `type.deserialize`.
The precision and scale of the column may be lower than those of the
calculated result. And mapping the calculated result to an enum label is
fairly meaningless: the result of such a mapping is almost always nil.
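A plain-Ruby sketch (with illustrative values, not the Rails internals) of why deserializing an average through an enum mapping is meaningless:

```ruby
# Enum attributes map database integers to labels; the average of those
# integers almost never matches a key, so the "deserialized" label is nil.
enum_mapping = { 0 => "draft", 1 => "published", 2 => "archived" }
statuses = [0, 1, 2, 2]

average = statuses.sum.to_f / statuses.size  # => 1.25
enum_mapping[average]                        # => nil
```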
Follow-up to #41404.
These tests will prevent regressions if we decide to change the
implementations of `maximum` or `minimum` in the future (for example,
calling `max_by` or `min_by` followed by `send`).
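For illustration, a plain-Ruby equivalent of such an alternative implementation (the struct and values here are hypothetical):

```ruby
# maximum(:credit) could conceivably be reimplemented as max_by + send;
# the new tests pin down that the result must stay the same either way.
Account = Struct.new(:credit)
accounts = [Account.new(50), Account.new(60), Account.new(53)]

maximum = accounts.max_by(&:credit).send(:credit)  # => 60
minimum = accounts.min_by(&:credit).send(:credit)  # => 50
```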
@osyo-manga reported a problem to me: preloading queries inside
`unprepared_statement` do not work as expected on the main branch
(queries are executed as prepared statements).
It is caused by #41385: `scope.to_sql` in `grouping_key` depends on
`unprepared_statement`, which has an issue when nested.
To fix the issue, don't add/delete the object_id in the prepared
statements disabled cache if prepared statements are already disabled.
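A plain-Ruby sketch of the fix (method and variable names are hypothetical, not the Rails internals):

```ruby
require "set"

# Only track/untrack the object_id when prepared statements were not already
# disabled, so an inner block's cleanup can't re-enable them for the outer one.
def unprepared_statement(cache, obj)
  already_disabled = cache.include?(obj.object_id)
  cache.add(obj.object_id) unless already_disabled
  yield
ensure
  cache.delete(obj.object_id) unless already_disabled
end

cache = Set.new
conn  = Object.new

still_disabled = nil
unprepared_statement(cache, conn) do
  unprepared_statement(cache, conn) {}              # nested block exits...
  still_disabled = cache.include?(conn.object_id)   # ...but we stay disabled
end
```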
The real problem behind the previous implementation of average
aggregation was not that float columns returned `BigDecimal` but that
average skipped `ActiveModel` type casting.
This change introduces handling for the only needed special case
of average: integers. Now any fields based on
`ActiveRecord::Type::Integer` will be cast to `BigDecimal` when
aggregated with average.
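A plain-Ruby sketch of the special case (values are illustrative):

```ruby
require "bigdecimal"
require "bigdecimal/util"

# Running the average back through the integer type would truncate it;
# casting to BigDecimal preserves the fractional part.
ids = [1, 2, 4]

truncated = ids.sum / ids.size       # => 2 (integer cast loses precision)
average   = ids.sum.to_d / ids.size  # => BigDecimal, 2.333...
```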
- skips the optimised `exists?` query for relations that have a having
clause
Relations that have aliased select values AND a having clause that
references an aliased select value would raise an error when #include?
was called, due to an optimisation that calls #exists? on the relation
instead. That call effectively alters the select values of the query
(and thus removes the aliased select values) but leaves the having
clause intact. Because the having clause then references an aliased
column that is no longer present in the simplified query, an
ActiveRecord::StatementInvalid error was raised.
A sample query affected by this problem:

    Author.select('COUNT(*) as total_posts', 'authors.*')
          .joins(:posts)
          .group(:id)
          .having('total_posts > 2')
          .include?(Author.first)
This change adds an additional check to the condition that skips the
simplified #exists? query, which simply checks for the presence of a
having clause.
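A sketch of the added condition (the method name is hypothetical, not the Rails internals):

```ruby
# The #exists? shortcut is now skipped whenever the relation has a having
# clause, since #exists? rewrites the select list that clause may reference.
def can_use_exists_shortcut?(having_clause)
  having_clause.empty?
end

can_use_exists_shortcut?([])                   # => true
can_use_exists_shortcut?(["total_posts > 2"])  # => false
```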
mysql2 knows how to handle Time and Date objects but doesn't know
about TimeWithZone. This was causing failures when prepared statements
were enabled since we were passing TimeWithZone objects that mysql2 didn't
know how to deal with.
A new environment variable to enable prepared statements was added to
config.example.yml, so we can test this in our CI and prevent regressions.
Fixes #41368.
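A plain-Ruby sketch of the idea (the helper is hypothetical, not the adapter code): values the driver can't serialise, like `TimeWithZone`, respond to `#to_time` and can be converted to a plain `Time` first.

```ruby
require "date"

# Normalise a bind value into something mysql2 can serialise: plain Time and
# Date pass through, while time-like wrappers are converted via #to_time.
def cast_bind_value(value)
  return value if value.is_a?(Time) || (value.is_a?(Date) && !value.is_a?(DateTime))
  value.respond_to?(:to_time) ? value.to_time : value
end

cast_bind_value(DateTime.now).class  # => Time
cast_bind_value("unchanged")         # => "unchanged"
```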
Similar to https://github.com/rails/rails/pull/40720
and https://github.com/rails/rails/pull/40805 this change allows for the
`scoping` method to apply to all queries in the block. Previously this
would only apply to queries on the class and not the instance, i.e.
`Post.create` and `Post.all`, but not `post.update` or `post.delete`.
The change here will create a global scope that is applied to all
queries for a relation for the duration of the block.
Benefits:
This change allows applications to add a scope to any query for the
duration of a block. This is useful for applications using sharding to
be able to control the query without requiring a `default_scope`. This
is useful if you want to have more control over when a `scoping` is used
on a relation. This also brings `scoping` into parity with the behavior
of `default_scope`, so there are fewer surprises between the behavior of
these two methods.
There are a few caveats to this behavior:
1) The `scoping` only applies to objects of the same type, i.e. you cannot
scope `Post.where(blog_id: 1).scoping` and then expect `post.comments`
will apply `blog_id = 1` to the `Comment` query. This is not possible
because the scope is `posts.blog_id = 1` and we can't apply the `posts`
scope to a `comments` query. To solve this, scopes must be nested.
2) If a block is scoped to `all_queries` it cannot be unscoped without
exiting the block. I couldn't find a way around this but ActiveRecord
scoping is a bit complex and turning off `all_queries` when it's already
on in nested scoping blocks had interesting behavior that I decided was
best left out.
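A minimal plain-Ruby sketch of the mechanism (names are hypothetical, not the Rails internals): a block-scoped value that any query built inside the block can see, restored on exit.

```ruby
# Set a "global scope" for the duration of a block; the previous value is
# restored on exit, mirroring how nested scoping blocks stack.
def with_global_scope(conditions)
  previous = Thread.current[:global_scope]
  Thread.current[:global_scope] = conditions
  yield
ensure
  Thread.current[:global_scope] = previous
end

def current_scope
  Thread.current[:global_scope]
end

inside = nil
with_global_scope(blog_id: 1) { inside = current_scope }

inside         # seen by class- and instance-level queries alike
current_scope  # => nil once the block exits
```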
Check if `query` is different from `attributes` before recursively calling `expand_from_hash`.
Updated cases to account for new company and comment records.
This change is extracted from #39547, which we never finished. This
minor refactoring adds a new `empty` method to clean up places where we
want to specifically return an empty result.
Co-authored-by: Aaron Patterson tenderlove@ruby-lang.org
The example uses `fetch` to retrieve HTML from the server and append it to an element on the page. This commit updates the example to append the response HTML rather than the full response object.
If the next query or the prepared statement itself is interrupted, this prevents the database session from getting stuck perpetually retrying to recreate the same prepared statement.
Carefully crafted input can cause a DoS via the regular expressions used
for validating the money format in the PostgreSQL adapter. This patch
fixes the regexp.
Thanks to @dee-see from Hackerone for this patch!
[CVE-2021-22880]
Found this issue while working on https://github.com/rails/rails/pull/41084
Currently if you have a `timestamp with time zone` column, and you run `rake db:schema:dump`, it will be output as a `t.datetime`. This is wrong because when you run `rake db:schema:load`, the column will be recreated as a `timestamp without time zone`.
The fix is to add a new column type, `t.timestamptz`. This behaves exactly the same as before, the only change is what native type it is converted to during schema loads.
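An illustrative mapping (a hypothetical hash, not the adapter's actual implementation) of what changes at `db:schema:load` time:

```ruby
# Schema method => native PostgreSQL type created on db:schema:load.
NATIVE_TYPES = {
  datetime:    "timestamp without time zone",  # unchanged behavior
  timestamptz: "timestamp with time zone",     # the new column type
}

NATIVE_TYPES[:timestamptz]  # => "timestamp with time zone"
```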