The implementation of `attribute_method?` on Active Record requires
establishing a database connection and querying the schema. As a general
rule, we don't want to require database connections for any class macro,
as the class should be able to be loaded without a database (e.g. for
things like compiling assets).
Instead of eagerly defining these methods, we define them lazily the
first time they are accessed, via `method_missing`. This should not cause any
performance hits, as it will only hit `method_missing` once for the
entire class.
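As an illustrative sketch (not the actual Active Record code), the lazy scheme looks roughly like this; `LazyAttributes`, its `COLUMNS` constant, and the guard flag are stand-ins for the real schema-driven machinery:

    # The first call to an undefined method triggers attribute method
    # generation (which is where the schema would be queried), then the
    # call is retried against the freshly defined methods.
    class LazyAttributes
      COLUMNS = [:name, :email] # stands in for a schema lookup

      def self.define_attribute_methods
        return false if @attribute_methods_generated
        COLUMNS.each { |column| attr_accessor(column) }
        @attribute_methods_generated = true
      end

      def method_missing(name, *args, &block)
        if self.class.define_attribute_methods
          send(name, *args, &block) # retry now that the methods exist
        else
          super # methods were already generated, so this really is undefined
        end
      end
    end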
A timestamp column can have less precision than a Ruby timestamp, which
limits how fine a fraction of a second can be stored in the database.
    m = Model.create!
    m.created_at.usec == m.reload.created_at.usec
    # => false
    # due to different seconds precision in Time.now and database column
If the precision is low enough (the MySQL default is 0, so it is always
low enough by default), the value changes when the model is reloaded
from the database. This patch fixes that issue by ensuring that any
timestamp assigned as an attribute is converted to the column's
precision when the attribute is set.
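For example, with this patch applied the commit's own example round-trips consistently (a minimal sketch; `Model` and its timestamp column are placeholders):

    m = Model.create!
    m.created_at.usec == m.reload.created_at.usec
    # => true; the in-memory value was truncated to the column's precision on assignment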
These new methods are used from the Active Record model layer to
determine which relations are viable to back a model. These new methods
allow us to change `conn.tables` in the future to only return tables and
no views. Same for `conn.table_exists?`.
The goal is to provide the following introspection methods on the
connection:
* `tables`
* `table_exists?`
* `views`
* `view_exists?`
* `data_sources` (views + tables)
* `data_source_exists?` (views + tables)
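A sketch of how the connection-level API reads once these methods are in place (the table and view names, and the return values shown, are only placeholders):

    conn = ActiveRecord::Base.connection
    conn.tables                        # => ["posts", "comments"]
    conn.table_exists?("posts")        # => true
    conn.views                         # => ["recent_posts"]
    conn.view_exists?("recent_posts")  # => true
    conn.data_sources                  # => tables + views
    conn.data_source_exists?("posts")  # => true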
In the case of using `unsigned` as the type:
    create_table :foos do |t|
      t.unsigned_integer :unsigned_integer
      t.unsigned_bigint :unsigned_bigint
      t.unsigned_float :unsigned_float
      t.unsigned_decimal :unsigned_decimal, precision: 10, scale: 2
    end
Currently, `set_fixture_class` is only available through the
`TestFixtures` concern and is ignored by `rake db:fixtures:load`.
Using the correct model class, it is possible for the fixture load
to also load the associations from the YAML files (e.g., `:belongs_to`
and `:has_many`).
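For reference, this is roughly how `set_fixture_class` is used from a test case today (the `funny_jokes`/`Joke` mapping is only an illustrative example):

    class JokesTest < ActiveSupport::TestCase
      set_fixture_class funny_jokes: Joke # map the funny_jokes fixture set to the Joke model
      fixtures :funny_jokes
    end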
Closes #21418.
Previously, schema names were not quoted. This leads to issues when a
schema name contains a ".". Methods in `schema_statements.rb` should
quote user input.
Since 87d1aba3c, `dependent: :destroy` callbacks on `has_one`
associations run *after* destroy, so it is possible that a nullification
is attempted on an already destroyed target:
    class Car < ActiveRecord::Base
      has_one :engine, dependent: :nullify
    end

    class Engine < ActiveRecord::Base
      belongs_to :car, dependent: :destroy
    end
    > car = Car.create!
    > engine = Engine.create!(car: car)
    > engine.destroy!
    # => ActiveRecord::ActiveRecordError: cannot update a destroyed record
In the above case, `engine.destroy!` deletes `engine` and *then* triggers the
deletion of `car`, which in turn triggers a nullification of `engine.car_id`.
However, `engine` is already destroyed at that point.
Fixes #21223.
Closes #21304.
While we can validate uniqueness for a record without a primary key on
creation, there is no way to exclude the current record when
updating. (The update itself will need a primary key to work correctly.)
`in_batches` yields Relation objects if a block is given, otherwise it
returns an instance of `BatchEnumerator`. The existing `find_each` and
`find_in_batches` methods work with batches of records. The new API
allows working with relation batches as well.
Examples:
    Person.in_batches.each_record(&:party_all_night!)
    Person.in_batches.update_all(awesome: true)
    Person.in_batches.delete_all
    Person.in_batches.map do |relation|
      relation.delete_all
      sleep 10 # Throttles the delete queries
    end
- Added a `run_cmd()` class method to DRY up the `Kernel.system()` calls within
  this namespace and to avoid shell expansion by passing a list of
  arguments instead of a single string
- Updated `structure_dump`, `structure_load`, and the related unit tests to
  pass a list of arguments instead of a string, avoiding shell expansion
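A minimal sketch of what such a helper might look like; the module name, error handling, and example command are assumptions for illustration, not the actual implementation:

    module DatabaseTasks
      # Passing the command and its arguments as separate list elements means
      # Kernel.system never invokes a shell, so nothing can be shell-expanded.
      def self.run_cmd(cmd, *args)
        Kernel.system(cmd, *args) || raise("failed to execute:\n#{([cmd] + args).join(' ')}")
      end
    end

    # e.g. instead of Kernel.system("pg_dump -s -x -O -f #{filename} #{db}"):
    # DatabaseTasks.run_cmd("pg_dump", "-s", "-x", "-O", "-f", filename, db)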
Deep down in the association internals, we're calling `destroy!` rather
than `destroy` when handling things like `dependent` or autosave
association callbacks. Unfortunately, due to the structure of the code
(e.g. it uses callbacks for everything), it's nearly impossible to pass
whether to call `destroy` or `destroy!` down to where we actually need
it.
As such, we have to do some legwork to handle this. Since the callbacks
are what actually raise the exception, we need to rescue it in
`ActiveRecord::Callbacks`, rather than `ActiveRecord::Persistence` where
it matters. (As an aside, if this code weren't so callback heavy,
handling this would likely be as simple as changing `destroy` to
call `destroy!` instead of the other way around.)
Since we don't want to lose the exception when `destroy!` is called (in
particular, we don't want the value of the `record` field to change to
the parent class), we have to do some additional legwork to hold onto it
where we can use it.
Again, all of this is ugly and there is definitely a better way to do
this. However, barring a much more significant re-architecting for what
I consider to be a relatively minor improvement, I'm willing to take
this small hit to the flow of this code (begrudgingly).
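A rough sketch of the shape this takes (illustrative only; the module layout and instance variable name are assumptions, not the exact Rails code):

    # In the callbacks layer: rescue the error raised by an association's
    # dependent/autosave callback, remember it, and return false just as a
    # plain `destroy` would.
    module Callbacks
      def destroy
        _run_destroy_callbacks { super }
      rescue ActiveRecord::RecordNotDestroyed => e
        @_association_destroy_exception = e
        false
      end
    end

    # In the persistence layer: `destroy!` re-raises the held exception so the
    # error's `record` still points at the association that failed, not the
    # parent.
    module Persistence
      def destroy!
        destroy || raise(@_association_destroy_exception ||
                         ActiveRecord::RecordNotDestroyed.new("Failed to destroy the record", self))
      end
    end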
This makes it clearer that they are reserved keywords, and it also
seems less redundant since the line already starts with the call to the
`enum` method.
ActiveRecord::RecordNotFound is modified to store the model name, primary
key, and id of the record that could not be found. This allows whoever
rescues the exception to make a better decision about what to do with it.
For example:
    class SomeAbstractController < ActionController::Base
      rescue_from ActiveRecord::RecordNotFound, with: :redirect_to_404

      private def redirect_to_404(e)
        return redirect_to(posts_url) if e.model == 'Post'
        raise
      end
    end
Previously `has_one` and `has_many` associations were using the
`one` and `many` keys respectively. Both of these keys have special
meaning in I18n (they are considered to be pluralizations) so by
renaming them to `has_one` and `has_many` we make the messages more
explicit and, most importantly, they no longer clash with linguistic
systems that need to validate translation keys (and their
pluralizations).
The `:'restrict_dependent_destroy.one'` key should be replaced with
`:'restrict_dependent_destroy.has_one'`, and
`:'restrict_dependent_destroy.many'` with
`:'restrict_dependent_destroy.has_many'`.
[Roque Pinel & Christopher Dell]
This clears the transaction record state when the transaction finishes
with a `:committed` status.
Consider the following example, where `name` is a required attribute.
Previously, `new_record?` returned `true` for a persisted record:
```ruby
author = Author.create! name: 'foo'
author.name = nil
author.save # => false
author.new_record? # => true
```
As per the docs, `mark_for_destruction` should do nothing if `autosave`
is not set to true. We normally persist associations on a new record no
matter what, but we were always skipping records that were
`marked_for_destruction?`.
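A hedged illustration of the documented behaviour this restores (`Post`/`Comment` are placeholder models with no `autosave: true` on the association):

    class Post < ActiveRecord::Base
      has_many :comments # autosave is not enabled
    end

    class Comment < ActiveRecord::Base
      belongs_to :post
    end

    post = Post.new
    comment = post.comments.build
    comment.mark_for_destruction # a no-op, since autosave is off
    post.save!
    comment.persisted? # => true; the marked child is still saved with its new parent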
Fixes #20882
Also removes a false positive test that depends on the fixed bug:
At this time, counter_cache does not work with polymorphic relationships
(which is a bug). The test was added to make sure that no
StaleObjectError is raised when the car is destroyed. No such error is
currently raised because the lock version is not incremented by
appending a wheel to the car.
Furthermore, `assert_difference` succeeds because `car.wheels.count`
does not check the counter cache, but the collection size. The test will
fail if it is replaced with `car.wheels_count || 0`.
This code is so fucked. Things that cause this bug not to replicate:
- Defining the validation before the association (we end up calling
`uniq!` on the errors in the autosave validation)
- Adding `accepts_nested_attributes_for` (I have no clue why. The only
  thing it does that should affect this is adding `autosave: true` to the
  inverse reflection, and doing that manually doesn't fix this).
This solution is a hack, and I'm almost certain there's a better way to
go about it, but this shouldn't cause a huge hit on validation times,
and is the simplest way to get it done.
Fixes #20874.
Since nested hashes are also instances of
`ActionController::Parameters`, and we're explicitly looking to work
with a hash for nested attributes, this caused breakage at several
points.
This is the minimum viable fix for the issue (and one that I'm not
terribly fond of). I can't think of a better place to handle this at the
moment. I'd prefer to use some sort of solution that doesn't special
case AC::Parameters, but we can't use something like `to_h` or `to_a`
since `Enumerable` adds both.
While I've added a trivial test case for verifying this fix in
isolation, we really need better integration coverage to prevent
regressions like this in the future. We don't actually have a lot of
great places for integration coverage at the moment, so I'm deferring it
for now.
Fixes #20922.
This is to simplify the association API, as you can call `reload` on the
association proxy or the parent object to get the same result.
For a collection association, you can call `#reload` on the association
proxy to force a reload:

    @user.posts.reload # Instead of @user.posts(true)

For a singular association, you can call `#reload` on the parent object
to clear its association cache and then call the association method:

    @user.reload.profile # Instead of @user.profile(true)

Passing a truthy argument to force an association to reload will be
removed in Rails 5.1.
The concurrent-ruby gem is a toolset containing many concurrency
utilities. Many of these utilities include runtime-specific
optimizations when possible. Rather than cluttering the Rails codebase
with concurrency utilities that are separate from its core task, such
tools can be superseded by similar ones from the more specialized gem.
This commit
replaces `ActiveSupport::Concurrency::Latch` with
`Concurrent::CountDownLatch`, which is functionally equivalent.
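For reference, a minimal usage sketch of the concurrent-ruby replacement (not taken from the Rails code itself):

    require "concurrent"

    latch = Concurrent::CountDownLatch.new(1)

    Thread.new do
      # ... do some work ...
      latch.count_down # counterpart of the old Latch#release
    end

    latch.wait # counterpart of the old Latch#await; blocks until the count reaches zero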