Merge pull request #34 from SimLeek/lens

Lens
2019-11-10 10:41:22 -07:00
committed by GitHub
142 changed files with 22625 additions and 910 deletions
+31
@@ -0,0 +1,31 @@
name: Python package
on: [push]
jobs:
build-and-publish:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v1
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: 3.7
- uses: dschep/install-poetry-action@v1.2
- name: Install dependencies
run: poetry install
- name: Build with Poetry
run: poetry build
- name: Publish distribution 📦 to Test PyPI
uses: pypa/gh-action-pypi-publish@master
with:
password: ${{ secrets.test_pypi_password }}
repository_url: https://test.pypi.org/legacy/
continue-on-error: true
- name: Publish distribution 📦 to PyPI
if: startsWith(github.event.ref, 'refs/tags')
uses: pypa/gh-action-pypi-publish@master
with:
password: ${{ secrets.pypi_password }}
+25
@@ -0,0 +1,25 @@
name: Python package
on: [push]
jobs:
build-and-publish:
runs-on: windows-latest
strategy:
fail-fast: false
max-parallel: 4
matrix:
python-version: [3.6, 3.7]
steps:
- uses: actions/checkout@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- uses: dschep/install-poetry-action@v1.2
- name: Install dependencies
run: poetry install
- name: Test with tox
run: poetry run tox -p auto -o
+1 -1
@@ -66,7 +66,7 @@ instance/
.scrapy
# Sphinx documentation
docs/_build/
docs/docsrc/_build/
# PyBuilder
target/
-23
@@ -1,23 +0,0 @@
language: python
dist: xenial
sudo: true
cache: pip
python:
- '3.5'
- '3.6'
- '3.7'
install:
- pip install -r requirements.txt
script:
- if [[ $TRAVIS_PYTHON_VERSION == 3.5 ]]; then export CIBW_BUILD='cp35*'; fi
- if [[ $TRAVIS_PYTHON_VERSION == 3.6 ]]; then export CIBW_BUILD='cp36*'; fi
- if [[ $TRAVIS_PYTHON_VERSION == 3.7 ]]; then export CIBW_BUILD='cp37*'; fi
- pip install cibuildwheel==0.10.1
- cibuildwheel --output-dir wheelhouse
deploy:
provider: pypi
user: SimLeek
skip_existing: true
password:
secure: Iyg4UilFJrpZeOyRCdyVmUT76qZzkmtPGoundyjgKIjmXharVgdpN5G1v/7NRCjifxk9vnI5QGFUcd88LpnyTg5tbLm7gttiyV4cue/UPXc8LJEu1vhc3y6fjT26pr3slEFQNuhYWCEx1HxMilizDURqxOIrQeOnYAe8UBuYrwbs6lyVLE18ojetRpMeDJ9GDwOtYboizv3TtohR/sv/XbKMpqMVWFdQU8hhahi5KgdMYq2RF+em3L9xraUiVID5AZ6DqtCod5iHbULIoabguB2ykMFCBf5XEzs6vEw4dFpvK/aHG1Z6Mmc5sIm6+Cklq73lgqkKCCTRL5Vjq/lsPQ36J4RREO64+O6pQ052M9n+wXpEo8dvy2YDYE3TvlpOmbVJ5BJrExGgYgjsQ9hslp8GroU4aZbroljQuRz8SpzAXhiHnrB29IYNpkQQk93KZAIQdT7xY5iykhtU0eo/uk1vuB64qHWxxJ05PUqCBaaRwWPHOECKccc+fIH6aRIICeRjvyAo/LHDD8b+fC+HFR6nPHKswi7Mhzl9kFI58nbNR6kdQTSPrEiVdIiOTVTY1kBWIZweyV/vJGh+PUZyyWe61r5PFxU9lXMZH8oY/xvGPlhaUrgLvJ1tV24m1EGJ9dbbeuQ9T6dQYqY28IC7gl9JKbmnTBewQfW/F0T2N5I=
-178
@@ -1,178 +0,0 @@
Copyright 2018 SimLeek
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
-21
@@ -1,21 +0,0 @@
Copyright (c) 2018 SimLeek
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-5
@@ -1,5 +0,0 @@
include README.md
include README.txt
include LICENSE-APACHE
include LICENSE-MIT
include LICENSE.md
+45 -21
@@ -1,10 +1,15 @@
# displayarray
displayarray
============
A library for displaying arrays as video in Python.
## Display arrays while updating them
Display arrays while updating them
----------------------------------
![](https://i.imgur.com/UEt6iR6.gif)
.. figure:: https://i.imgur.com/UEt6iR6.gif
:alt:
::
from displayarray import display
import numpy as np
@@ -16,12 +21,15 @@ A library for displaying arrays as video in Python.
arr[:] += np.random.normal(0.001, 0.0005, (100, 100, 3))
arr %= 1.0
## Run functions on 60fps webcam or video input
Run functions on 60fps webcam or video input
--------------------------------------------
[![](https://thumbs.gfycat.com/AbsoluteEarnestEelelephant-size_restricted.gif)](https://gfycat.com/absoluteearnesteelelephant)
|image0|
(Video Source: https://www.youtube.com/watch?v=WgXQ59rg0GM)
::
from displayarray import display
import math as m
@@ -34,15 +42,19 @@ A library for displaying arrays as video in Python.
forest_color.i = 0
display("fractal test.mp4", callbacks=forest_color, blocking=True, fps_limit=120)
## Display tensors as they're running through TensorFlow or PyTorch
![](https://i.imgur.com/TejCpIP.png)
Display tensors as they're running through TensorFlow or PyTorch
----------------------------------------------------------------
.. figure:: https://i.imgur.com/TejCpIP.png
:alt:
::
# see test_display_tensorflow in test_simple_apy for full code.
...
autoencoder.compile(loss="mse", optimizer="adam")
while displayer:
@@ -60,10 +72,13 @@ A library for displaying arrays as video in Python.
output_image = autoencoder.predict(grab, steps=1)
displayer.update((output_image[0] * 255.0).astype(np.uint8), "uid for autoencoder output")
## Handle input events
Handle input events
-------------------
Mouse events captured whenever the mouse moves over the window:
::
event:0
x,y:133,387
flags:0
@@ -71,33 +86,42 @@ Mouse events captured whenever the mouse moves over the window:
Code:
::
from displayarray.input import mouse_loop
from displayarray import display
@mouse_loop
def print_mouse_thread(mouse_event):
print(mouse_event)
display("fractal test.mp4", blocking=True)
## Installation
Installation
------------
displayarray is distributed on [PyPI](https://pypi.org) as a universal
wheel in Python 3.6+ and PyPy.
displayarray is distributed on `PyPI <https://pypi.org>`__ as a
universal wheel in Python 3.6+ and PyPy.
::
$ pip install displayarray
## Usage
Usage
-----
See tests for more example code. API will be generated soon.
## License
License
-------
displayarray is distributed under the terms of both
- [MIT License](https://choosealicense.com/licenses/mit)
- [Apache License, Version 2.0](https://choosealicense.com/licenses/apache-2.0)
- `MIT License <https://choosealicense.com/licenses/mit>`__
- `Apache License, Version
2.0 <https://choosealicense.com/licenses/apache-2.0>`__
at your option.
.. |image0| image:: https://thumbs.gfycat.com/AbsoluteEarnestEelelephant-size_restricted.gif
:target: https://gfycat.com/absoluteearnesteelelephant
-1
@@ -1 +0,0 @@
A threaded PubSub OpenCV interface. Webcam and video feeds to multiple windows are supported.
-2
@@ -1,2 +0,0 @@
# redirection, so we can use subtree like pip
from displayarray import frame_publising, subscriber_window
+3 -1
@@ -6,4 +6,6 @@ display is a function that displays these in their own windows.
__version__ = "0.6.6"
from .subscriber_window.subscriber_windows import display
from .window.subscriber_windows import display, breakpoint_display
from .frame.frame_updater import read_updates
from .frame.frame_publishing import publish_updates_zero_mq, publish_updates_ros
+77
@@ -0,0 +1,77 @@
"""
DisplayArray.
Display NumPy arrays.
Usage:
displayarray (-w <webcam-number> | -v <video-filename> | -t <topic-name>[,dtype])... [-m <msg-backend>]
displayarray -h
displayarray --version
Options:
-h, --help Show this help text.
--version Show version number.
-w <webcam-number>, --webcam=<webcam-number> Display video from a webcam.
-v <video-filename>, --video=<video-filename> Display frames from a video file.
-t <topic-name>, --topic=<topic-name> Display frames from a topic using the chosen message broker.
-m <msg-backend>, --message-backend <msg-backend> Choose message broker backend. [Default: ROS]
Currently supported: ROS, ZeroMQ
--ros Use ROS as the backend message broker.
--zeromq Use ZeroMQ as the backend message broker.
"""
from docopt import docopt
import asyncio
def main(argv=None):
"""Process command line arguments."""
arguments = docopt(__doc__, argv=argv)
if arguments["--version"]:
from displayarray import __version__
print(f"DisplayArray V{__version__}")
return
from displayarray import display
vids = [int(w) for w in arguments["--webcam"]] + arguments["--video"]
v_disps = None
if vids:
v_disps = display(*vids, blocking=False)
from displayarray.frame.frame_updater import read_updates_ros, read_updates_zero_mq
topics = arguments["--topic"]
topics_split = [t.split(",") for t in topics]
d = display()
async def msg_recv():
nonlocal d
while d:
if arguments["--message-backend"] == "ROS":
async for v_name, frame in read_updates_ros(
[t for t, d in topics_split], [d for t, d in topics_split]
):
d.update(arr=frame, id=v_name)
if arguments["--message-backend"] == "ZeroMQ":
async for v_name, frame in read_updates_zero_mq(
*[bytes(t, encoding="ascii") for t in topics]
):
d.update(arr=frame, id=v_name)
async def update_vids():
while v_disps:
if v_disps:
v_disps.update()
await asyncio.sleep(0)
async def runner():
await asyncio.wait([msg_recv(), update_vids()])
loop = asyncio.get_event_loop()
loop.run_until_complete(runner())
loop.close()
if __name__ == "__main__":
main()
@@ -1,3 +1,5 @@
"""Generate unique IDs for videos."""
from collections.abc import Hashable
+16
@@ -0,0 +1,16 @@
"""Functions needed to deal with OpenCV."""
import weakref
class WeakMethod(weakref.WeakMethod):
"""Pass any method to OpenCV without it keeping a reference forever."""
def __call__(self, *args, **kwargs):
"""Call the actual method this object was made with."""
obj = super().__call__()
func = self._func_ref()
if obj is None or func is None:
return None
meth = self._meth_type(func, obj)
meth(*args, **kwargs)
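The point of this wrapper can be shown with a minimal, standalone sketch using only the standard library (the `Window` class below is hypothetical, not part of the module): a plain bound-method callback keeps its owner alive forever, while a `weakref.WeakMethod` lets the owner be garbage collected once nothing else references it.

```python
import gc
import weakref


class Window:
    """Stand-in for an object that registers one of its methods as a callback."""

    def on_mouse(self, x, y):
        return (x, y)


w = Window()
cb = weakref.WeakMethod(w.on_mouse)  # holds no strong reference to `w`

meth = cb()  # dereference: returns the bound method, or None if dead
assert meth(1, 2) == (1, 2)

del w, meth
gc.collect()  # the window can now be collected
assert cb() is None  # the callback is dead instead of leaking the window
```

The subclass above adds a `__call__` that does this dereference internally and silently no-ops when the target is gone, which is the shape OpenCV expects from a callback.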
+8 -6
@@ -1,4 +1,6 @@
from displayarray.subscriber_window import window_commands
"""Standard callbacks to use on incoming frames."""
from displayarray.window import window_commands
import numpy as np
from typing import Union
@@ -13,9 +15,9 @@ def global_cv_display_callback(frame: np.ndarray, cam_id: Union[int, str]):
:param cam_id: The video or image source
:type cam_id: Union[int, str]
"""
from displayarray.subscriber_window import SubscriberWindows
from displayarray.window import SubscriberWindows
SubscriberWindows.FRAME_DICT[str(cam_id) + "frame"] = frame
SubscriberWindows.FRAME_DICT[str(cam_id)] = frame
class function_display_callback(object): # NOSONAR
@@ -23,19 +25,19 @@ class function_display_callback(object): # NOSONAR
Used for running arbitrary functions on pixels.
>>> import random
>>> from displayarray.webcam_pub import VideoHandlerThread
>>> from displayarray.frame import FrameUpdater
>>> img = np.zeros((300, 300, 3))
>>> def fun(array, coords, finished):
... r,g,b = random.random()/20.0, random.random()/20.0, random.random()/20.0
... array[coords[0:2]] = (array[coords[0:2]] + [r,g,b])%1.0
>>> VideoHandlerThread(video_source=img, callbacks=function_display_callback(fun)).display()
>>> FrameUpdater(video_source=img, callbacks=function_display_callback(fun)).display()
:param display_function: a function to run on the input image.
:param finish_function: a function to run on the input image when the other function finishes.
"""
def __init__(self, display_function, finish_function=None):
"""Run display_function on frames."""
self.looping = True
self.first_call = True
+1
@@ -0,0 +1 @@
"""Effects to run on numpy arrays to make data clearer."""
+122
@@ -0,0 +1,122 @@
"""Crop any n-dimensional array."""
import numpy as np
from displayarray.input import mouse_loop
class Crop(object):
"""
A callback class that will return the input array cropped to the output size. N-dimensional.
>>> crop_it = Crop((2,2,2))
>>> arr = np.ones((4,4,4))
>>> crop_it(arr)
array([[[1., 1.],
[1., 1.]],
<BLANKLINE>
[[1., 1.],
[1., 1.]]])
"""
def __init__(self, output_size=(64, 64, 3), center=None):
"""Create the cropper."""
self._output_size = None
self._center = None
self.odd_center = None
self.mouse_control = None
self.input_size = None
self.center = center
self.output_size = output_size
@property
def output_size(self):
"""Get the output size after cropping."""
return self._output_size
@output_size.setter
def output_size(self, set):
"""Set what the output size will be after cropping."""
self._output_size = set
if self._output_size is not None:
self._output_size = np.asarray(set)
@property
def center(self):
"""Get center crop position on the input."""
return self._center
@center.setter
def center(self, set):
"""Set center crop position on the input."""
self._center = set
if self._center is not None:
self._center = np.asarray(set)
def __call__(self, arr):
"""Crop the input array to the specified output size. output is centered on self.center point on input."""
self.input_size = arr.shape
if self.center is None:
self.center = [int(arr.shape[x]) // 2 for x in range(arr.ndim)]
self.odd_out = np.array(
[self.output_size[x] % 2 for x in range(len(self.output_size))]
)
self.odd_center = np.array(
[self.center[x] % 2 for x in range(len(self.center))]
)
center = self.center.copy() # stop opencv from thread breaking us
top_left_get = [
min(max(0, center[x] - self.output_size[x] // 2), arr.shape[x] - 1)
for x in range(arr.ndim)
]
bottom_right_get = [
min(
max(0, center[x] + self.output_size[x] // 2 + self.odd_out[x]),
arr.shape[x],
)
for x in range(arr.ndim)
]
top_left_put = [
min(
max(
0,
-(bottom_right_get[x] - center[x] - self.output_size[x] // 2)
+ self.odd_out[x],
),
self.output_size[x],
)
for x in range(arr.ndim)
]
bottom_right_put = [
min(
max(0, top_left_put[x] + (bottom_right_get[x] - top_left_get[x])),
self.output_size[x],
)
for x in range(arr.ndim)
]
get_slices = [slice(x1, x2) for x1, x2 in zip(top_left_get, bottom_right_get)]
get_slices = tuple(get_slices)
put_slices = [slice(x1, x2) for x1, x2 in zip(top_left_put, bottom_right_put)]
put_slices = tuple(put_slices)
out_array = np.zeros(self.output_size)
out_array[put_slices] = arr[get_slices]
return out_array.astype(arr.dtype)
def enable_mouse_control(self):
"""Move the mouse to move where the crop is from on the original image."""
@mouse_loop
def m_loop(me):
if self.center is None:
self.center = [0, 0, 1]
self.center[:] = [
int(me.y / self.output_size[0] * self.input_size[0]),
int(me.x / self.output_size[1] * self.input_size[1]),
1,
]
self.mouse_control = m_loop
return self
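The slice arithmetic in `__call__` can be condensed into a standalone sketch (plain NumPy, no mouse control; `center_crop` is a hypothetical helper, not part of the module): read a clamped window from the input and write it into a zero-filled output, so regions that fall off the input's edge pad with zeros.

```python
import numpy as np


def center_crop(arr, output_size, center=None):
    """Crop `arr` to `output_size` around `center`, zero-padding out-of-bounds regions."""
    out_size = np.asarray(output_size)
    if center is None:
        center = [s // 2 for s in arr.shape]
    out = np.zeros(tuple(out_size), dtype=arr.dtype)
    get, put = [], []
    for c, o, s in zip(center, out_size, arr.shape):
        lo = c - o // 2                      # desired top-left corner on the input
        g0, g1 = max(0, lo), min(s, lo + o)  # read window, clamped to the input
        p0 = g0 - lo                         # where the read window lands in the output
        get.append(slice(g0, g1))
        put.append(slice(p0, p0 + (g1 - g0)))
    out[tuple(put)] = arr[tuple(get)]
    return out


cropped = center_crop(np.ones((4, 4, 4)), (2, 2, 2))
assert cropped.shape == (2, 2, 2) and cropped.all()
```

Like the class, this is dimension-agnostic: the same loop handles grayscale, color, or higher-rank arrays, since each axis is clamped independently.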
+257
@@ -0,0 +1,257 @@
"""Create lens effects. Currently only 2D+color arrays are supported."""
import numpy as np
from displayarray.input import mouse_loop
import cv2
class _ControllableLens(object):
def __init__(self, use_bleed=False, zoom=1, center=None):
self.center = center
self.zoom = zoom
self.use_bleed = use_bleed
self.bleed = None
self.mouse_control = None
def check_setup_bleed(self, arr):
if not isinstance(self.bleed, np.ndarray) and self.use_bleed:
self.bleed = np.zeros_like(arr)
def run_bleed(self, arr, x, y):
arr[y, ...] = (arr[(y + 1) % len(y), ...] + arr[(y - 1) % len(y), ...]) / 2
arr[:, x, ...] = (
arr[:, (x + 1) % len(x), ...] + arr[:, (x - 1) % len(x), ...]
) / 2
class Barrel(_ControllableLens):
"""
Create a barrel distortion.
>>> distort_it = Barrel(zoom=1, barrel_power=1.5)
>>> x = np.linspace(0, 1, 4)
>>> y = np.linspace(0, 1, 4)
>>> c = np.linspace(0, 1, 2)
>>> arrx, arry, arrc = np.meshgrid(x,y,c)
>>> arrx
array([[[0. , 0. ],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[1. , 1. ]],
<BLANKLINE>
[[0. , 0. ],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[1. , 1. ]],
<BLANKLINE>
[[0. , 0. ],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[1. , 1. ]],
<BLANKLINE>
[[0. , 0. ],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[1. , 1. ]]])
>>> distort_it(arrx)
array([[[0.33333333, 0.33333333],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[0.66666667, 0.66666667]],
<BLANKLINE>
[[0.33333333, 0.33333333],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[0.66666667, 0.66666667]],
<BLANKLINE>
[[0.33333333, 0.33333333],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[0.66666667, 0.66666667]],
<BLANKLINE>
[[0.33333333, 0.33333333],
[0.33333333, 0.33333333],
[0.66666667, 0.66666667],
[0.66666667, 0.66666667]]])
:param zoom: How far to zoom into the array
:param barrel_power: How much to distort.
1 = no distortion. >1 increases size of center. 0<x<1 increases peripheral.
:param center: Center to apply the distortion at on the source image.
:param use_bleed: Fill in black regions with previous frame values. Shouldn't be necessary in most cases.
"""
def __init__(self, zoom=1, barrel_power=1, center=None, use_bleed=False):
"""Create the distorter."""
super().__init__(use_bleed, zoom, center)
self.center = center
self.zoom = zoom
self.use_bleed = use_bleed
self.bleed = None
self.barrel_power = barrel_power
self.mouse_control = None
def enable_mouse_control(self):
"""
Enable mouse control.
Move the mouse to center the image, scroll to increase/decrease barrel, ctrl+scroll to increase/decrease zoom.
"""
@mouse_loop
def m_loop(me):
self.center[:] = [me.y, me.x]
if me.event == cv2.EVENT_MOUSEWHEEL:
if me.flags & cv2.EVENT_FLAG_CTRLKEY:
if me.flags > 0:
self.zoom *= 1.1
else:
self.zoom /= 1.1
else:
if me.flags > 0:
self.barrel_power *= 1.1
else:
self.barrel_power /= 1.1
self.mouse_control = m_loop
return self
def __call__(self, arr):
"""Run the distortion on an array."""
zoom_out = 1.0 / self.zoom
self.check_setup_bleed(arr)
y = np.arange(arr.shape[0])
x = np.arange(arr.shape[1])
if self.center is None:
self.center = [len(y) / 2.0, len(x) / 2.0]
y2_ = (y - (len(y) / 2.0)) * zoom_out / arr.shape[0]
x2_ = (x - (len(x) / 2.0)) * zoom_out / arr.shape[1]
p2 = np.array(np.meshgrid(x2_, y2_))
cy = self.center[0] / arr.shape[0]
cx = self.center[1] / arr.shape[1]
barrel_power = self.barrel_power
theta = np.arctan2(p2[1], p2[0])
radius = np.linalg.norm(p2, axis=0, ord=2)
radius = pow(radius, barrel_power)
x_new = 0.5 * (radius * np.cos(theta) + cx * 2)
x_new = np.clip(x_new * len(x), 0, len(x) - 1)
y_new = 0.5 * (radius * np.sin(theta) + cy * 2)
y_new = np.clip(y_new * len(y), 0, len(y) - 1)
p = np.array(np.meshgrid(y, x)).astype(np.uint32)
p_new = np.array((y_new, x_new))
p_new = p_new.astype(np.uint32)
if self.use_bleed:
arr2 = self.bleed.copy()
self.run_bleed(arr2, x, y)
arr2[p_new[0], p_new[1], :] = np.swapaxes(arr[p[0], p[1], :], 0, 1)
self.bleed = arr2
else:
arr[p[0], p[1], :] = np.swapaxes(arr[p_new[0], p_new[1], :], 0, 1)
return arr
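Stripped of bleed, zoom, and the movable center, the remap above is a polar transform: normalize coordinates around the image center, raise the radius to `barrel_power`, and convert back to pixel indices. A standalone sketch under those assumptions (`barrel_map` is a hypothetical helper; center fixed at the middle):

```python
import numpy as np


def barrel_map(h, w, power):
    """Return (y, x) source indices implementing r -> r**power about the image center."""
    y = (np.arange(h) - h / 2.0) / h  # normalize to roughly [-0.5, 0.5)
    x = (np.arange(w) - w / 2.0) / w
    xx, yy = np.meshgrid(x, y)
    theta = np.arctan2(yy, xx)
    r = np.hypot(xx, yy) ** power     # the actual distortion step
    x_new = np.clip((r * np.cos(theta) + 0.5) * w, 0, w - 1)
    y_new = np.clip((r * np.sin(theta) + 0.5) * h, 0, h - 1)
    return y_new.astype(np.uint32), x_new.astype(np.uint32)


ys, xs = barrel_map(64, 64, power=1.0)
# power=1 is, up to rounding, the identity map
assert abs(xs - np.arange(64)).max() <= 1
assert abs(ys - np.arange(64)[:, None]).max() <= 1
```

`__call__` applies the same radius-power step but offsets the result by the chosen center and divides the normalized coordinates by the zoom first.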
class Mustache(_ControllableLens):
"""Create a mustache distortion."""
def __init__(
self, use_bleed=False, barrel_power=1, pincushion_power=1, zoom=1, center=None
):
"""Create the distorter."""
super().__init__(use_bleed, zoom, center)
self.center = center
self.zoom = zoom
self.use_bleed = use_bleed
self.bleed = None
self.barrel_power = barrel_power
self.pincushion_power = pincushion_power
self.mouse_control = None
def enable_mouse_control(self):
"""
Enable mouse control.
Move the mouse to center the image.
Scroll to increase/decrease barrel.
Ctrl+scroll to increase/decrease zoom.
Shift+Scroll to increase/decrease pincushion.
"""
@mouse_loop
def m_loop(me):
self.center[:] = [me.y, me.x]
if me.event == cv2.EVENT_MOUSEWHEEL:
if me.flags & cv2.EVENT_FLAG_CTRLKEY:
if me.flags > 0:
self.zoom *= 1.1
else:
self.zoom /= 1.1
elif me.flags & cv2.EVENT_FLAG_SHIFTKEY:
if me.flags > 0:
self.pincushion_power *= 1.1
else:
self.pincushion_power /= 1.1
else:
if me.flags > 0:
self.barrel_power *= 1.1
else:
self.barrel_power /= 1.1
self.mouse_control = m_loop
def __call__(self, arr):
"""Run the distortion on an array."""
zoom_out = 1.0 / self.zoom
self.check_setup_bleed(arr)
y = np.arange(arr.shape[0])
x = np.arange(arr.shape[1])
if self.center is None:
self.center = [len(y) / 2.0, len(x) / 2.0]
y2_ = (y - self.center[0]) * zoom_out / arr.shape[0]
x2_ = (x - self.center[1]) * zoom_out / arr.shape[1]
p2 = np.array(np.meshgrid(x2_, y2_))
barrel_power = self.barrel_power
pincushion_power = self.pincushion_power
theta = np.arctan2(p2[1], p2[0])
radius = np.linalg.norm(p2, axis=0)
radius2 = np.linalg.norm(p2, axis=0, ord=4)
radius = pow(radius, barrel_power)
radius2 = pow(radius2, pincushion_power)
x_new = 0.5 * (radius2 * radius * np.cos(theta) + 1)
x_new = np.clip(x_new * len(x), 0, len(x) - 1)
y_new = 0.5 * (radius2 * radius * np.sin(theta) + 1)
y_new = np.clip(y_new * len(y), 0, len(y) - 1)
p = np.array(np.meshgrid(y, x)).astype(np.uint32)
p_new = np.array((y_new, x_new)).astype(np.uint32)
if self.use_bleed:
arr2 = self.bleed.copy()
self.run_bleed(arr2, x, y)
arr2[p_new[0], p_new[1], :] = np.swapaxes(arr[p[0], p[1], :], 0, 1)
self.bleed = arr2
else:
arr2 = np.zeros_like(arr)
arr2[p_new[0], p_new[1], :] = np.swapaxes(arr[p[0], p[1], :], 0, 1)
return arr2
+81
@@ -0,0 +1,81 @@
"""Reduce many color images to the three colors that your eyeballs can see."""
import numpy as np
from ..input import mouse_loop
import cv2
from typing import Iterable
class SelectChannels(object):
"""
Select channels to display from an array with too many colors.
:param selected_channels: the list of channels to display.
"""
def __init__(self, selected_channels: Iterable[int] = None):
"""Select which channels from the input array to display in the output."""
if selected_channels is None:
selected_channels = [0, 0, 0]
self.selected_channels = selected_channels
self.mouse_control = None
self.mouse_print_channels = False
self.num_input_channels = None
def __call__(self, arr):
"""Run the channel selector."""
self.num_input_channels = arr.shape[-1]
out_arr = [
arr[..., min(max(0, x), arr.shape[-1] - 1)] for x in self.selected_channels
]
out_arr = np.stack(out_arr, axis=-1)
return out_arr
def enable_mouse_control(self):
"""
Enable mouse control.
Alt+Scroll to increase/decrease channel 2.
Shift+Scroll to increase/decrease channel 1.
Ctrl+scroll to increase/decrease channel 0.
"""
@mouse_loop
def m_loop(me):
if me.event == cv2.EVENT_MOUSEWHEEL:
if me.flags & cv2.EVENT_FLAG_CTRLKEY:
if me.flags > 0:
self.selected_channels[0] += 1
self.selected_channels[0] = min(
self.selected_channels[0], self.num_input_channels - 1
)
else:
self.selected_channels[0] -= 1
self.selected_channels[0] = max(self.selected_channels[0], 0)
if self.mouse_print_channels:
print(f"Channel 0 now maps to {self.selected_channels[0]}.")
elif me.flags & cv2.EVENT_FLAG_SHIFTKEY:
if me.flags > 0:
self.selected_channels[1] += 1
self.selected_channels[1] = min(
self.selected_channels[1], self.num_input_channels - 1
)
else:
self.selected_channels[1] -= 1
self.selected_channels[1] = max(self.selected_channels[1], 0)
if self.mouse_print_channels:
print(f"Channel 1 now maps to {self.selected_channels[1]}.")
elif me.flags & cv2.EVENT_FLAG_ALTKEY:
if me.flags > 0:
self.selected_channels[2] += 1
self.selected_channels[2] = min(
self.selected_channels[2], self.num_input_channels - 1
)
else:
self.selected_channels[2] -= 1
self.selected_channels[2] = max(self.selected_channels[2], 0)
if self.mouse_print_channels:
print(f"Channel 2 now maps to {self.selected_channels[2]}.")
self.mouse_control = m_loop
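The core of `SelectChannels.__call__` is plain fancy indexing: clamp each requested channel into range, pull it out, and restack. A minimal sketch of just that step (numpy only, no mouse control; `select_channels` is a hypothetical free-function form of the method):

```python
import numpy as np

def select_channels(arr, selected):
    # clamp each requested channel index into the valid range,
    # then stack the chosen planes back into a displayable array
    return np.stack(
        [arr[..., min(max(0, c), arr.shape[-1] - 1)] for c in selected],
        axis=-1,
    )

hyperspectral = np.random.rand(4, 4, 6)               # six-channel input
viewable = select_channels(hyperspectral, [5, 2, 0])  # three channels out
```

Out-of-range requests are clamped rather than raising, matching the class above: asking for channel 99 of a six-channel array yields channel 5.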
@@ -9,7 +9,7 @@ np_cam simulates numpy arrays as OpenCV cameras
"""
from . import subscriber_dictionary
from .frame_update_thread import VideoHandlerThread
from .frame_updater import FrameUpdater, read_updates
from .get_frame_ids import get_cam_ids
from .np_to_opencv import NpCam
from .frame_publishing import pub_cam_thread
+220
@@ -0,0 +1,220 @@
"""Publish frames so any function within this program can find them."""
import threading
import time
import asyncio
import cv2
import numpy as np
from displayarray.frame import subscriber_dictionary
from .np_to_opencv import NpCam
from displayarray._uid import uid_for_source
from typing import Union, Tuple, Optional, Dict, Any, List, Callable
FrameCallable = Callable[[np.ndarray], Optional[np.ndarray]]
def pub_cam_loop(
cam_id: Union[int, str, np.ndarray],
request_size: Tuple[int, int] = (-1, -1),
high_speed: bool = True,
fps_limit: float = 240,
) -> bool:
"""
Publish whichever camera you select to CVCams.<cam_id>.Vid.
You can send a quit command 'quit' to CVCams.<cam_id>.Cmd
Status information, such as failure to open, will be posted to CVCams.<cam_id>.Status
:param high_speed: Selects MJPEG transfer, which most cameras support, so bandwidth doesn't limit the frame rate
:param fps_limit: Limits the frames per second.
:param cam_id: An integer representing which webcam to use, or a string representing a video file.
:param request_size: A tuple with width, then height, to request the video size.
:return: True if loop ended normally, False if it failed somehow.
"""
name = uid_for_source(cam_id)
if isinstance(cam_id, (int, str)):
cam: Union[NpCam, cv2.VideoCapture] = cv2.VideoCapture(cam_id)
elif isinstance(cam_id, np.ndarray):
cam = NpCam(cam_id)
else:
raise TypeError(
"Only strings or ints representing cameras, or numpy arrays representing pictures, are supported."
)
subscriber_dictionary.register_cam(name)
# cam.set(cv2.CAP_PROP_CONVERT_RGB, 0)
frame_counter = 0
sub = subscriber_dictionary.cam_cmd_sub(name)
sub.return_on_no_data = ""
msg = ""
if high_speed:
cam.set(cv2.CAP_PROP_FOURCC, cv2.CAP_OPENCV_MJPEG)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, request_size[0])
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, request_size[1])
if not cam.isOpened():
subscriber_dictionary.CV_CAMS_DICT[name].status_pub.publish("failed")
return False
now = time.time()
while msg != "quit":
time.sleep(max(1.0 / fps_limit - (time.time() - now), 0))
now = time.time()
(ret, frame) = cam.read()  # type: Tuple[bool, np.ndarray]
if ret is False or not isinstance(frame, np.ndarray):
cam.release()
subscriber_dictionary.CV_CAMS_DICT[name].status_pub.publish("failed")
return False
if cam.get(cv2.CAP_PROP_FRAME_COUNT) > 0:
frame_counter += 1
if frame_counter >= cam.get(cv2.CAP_PROP_FRAME_COUNT):
frame_counter = 0
cam = cv2.VideoCapture(cam_id)
subscriber_dictionary.CV_CAMS_DICT[name].frame_pub.publish(frame)
msg = sub.get()
sub.release()
cam.release()
return True
def pub_cam_thread(
cam_id: Union[int, str],
request_size: Tuple[int, int] = (-1, -1),
high_speed: bool = True,
fps_limit: float = 240,
) -> threading.Thread:
"""Run pub_cam_loop in a new thread. Starts on creation."""
t = threading.Thread(
target=pub_cam_loop, args=(cam_id, request_size, high_speed, fps_limit)
)
t.start()
return t
async def publish_updates_zero_mq(
*vids,
callbacks: Optional[
Union[Dict[Any, FrameCallable], List[FrameCallable], FrameCallable]
] = None,
fps_limit=float("inf"),
size=(-1, -1),
end_callback: Callable[[], bool] = lambda: False,
blocking=False,
publishing_address="tcp://127.0.0.1:5600",
prepend_topic="",
flags=0,
copy=True,
track=False
):
"""Publish frames to ZeroMQ when they're updated."""
import zmq
from displayarray import read_updates
ctx = zmq.Context()
s = ctx.socket(zmq.PUB)
s.bind(publishing_address)
if not blocking:
flags |= zmq.NOBLOCK
try:
for v in read_updates(vids, callbacks, fps_limit, size, end_callback, blocking):
if v:
for vid_name, frame in v.items():
md = dict(
dtype=str(frame.dtype),
shape=frame.shape,
name=prepend_topic + vid_name,
)
s.send_json(md, flags | zmq.SNDMORE)
s.send(frame, flags, copy=copy, track=track)
if fps_limit:
await asyncio.sleep(1.0 / fps_limit)
else:
await asyncio.sleep(0)
except KeyboardInterrupt:
pass
finally:
vid_names = [uid_for_source(name) for name in vids]
for v in vid_names:
subscriber_dictionary.stop_cam(v)
async def publish_updates_ros(
*vids,
callbacks: Optional[
Union[Dict[Any, FrameCallable], List[FrameCallable], FrameCallable]
] = None,
fps_limit=float("inf"),
size=(-1, -1),
end_callback: Callable[[], bool] = lambda: False,
blocking=False,
node_name="displayarray",
publisher_name="npy",
rate_hz=None,
dtype=None
):
"""Publish frames to ROS when they're updated."""
import rospy
from rospy.numpy_msg import numpy_msg
import std_msgs.msg
from displayarray import read_updates
def get_msg_type(dtype):
if dtype is None:
msg_type = {
np.float32: std_msgs.msg.Float32(),
np.float64: std_msgs.msg.Float64(),
np.bool: std_msgs.msg.Bool(),
np.char: std_msgs.msg.Char(),
np.int16: std_msgs.msg.Int16(),
np.int32: std_msgs.msg.Int32(),
np.int64: std_msgs.msg.Int64(),
np.str: std_msgs.msg.String(),
np.uint16: std_msgs.msg.UInt16(),
np.uint32: std_msgs.msg.UInt32(),
np.uint64: std_msgs.msg.UInt64(),
np.uint8: std_msgs.msg.UInt8(),
}[dtype]
else:
msg_type = (
dtype
) # allow users to use their own custom messages in numpy arrays
return msg_type
publishers: Dict[str, rospy.Publisher] = {}
rospy.init_node(node_name, anonymous=True)
try:
for v in read_updates(vids, callbacks, fps_limit, size, end_callback, blocking):
if v:
if rospy.is_shutdown():
break
for vid_name, frame in v.items():
if vid_name not in publishers:
dty = frame.dtype if dtype is None else dtype
publishers[vid_name] = rospy.Publisher(
publisher_name + vid_name,
numpy_msg(get_msg_type(dty)),
queue_size=10,
)
publishers[vid_name].publish(frame)
if rate_hz:
await asyncio.sleep(1.0 / rate_hz)
else:
await asyncio.sleep(0)
except KeyboardInterrupt:
pass
finally:
vid_names = [uid_for_source(name) for name in vids]
for v in vid_names:
subscriber_dictionary.stop_cam(v)
if rospy.core.is_shutdown():
raise rospy.exceptions.ROSInterruptException("rospy shutdown")
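`publish_updates_zero_mq` sends each frame as a two-part message: a JSON metadata header (dtype, shape, topic name) followed by the raw frame bytes. The round trip a subscriber would perform can be sketched without a socket (json + numpy only; `pack`/`unpack` are hypothetical helper names, not part of the library):

```python
import json
import numpy as np

def pack(frame, name):
    # part 1: JSON metadata describing the buffer; part 2: raw bytes,
    # mirroring the send_json(..., SNDMORE) / send(frame) pair above
    md = dict(dtype=str(frame.dtype), shape=frame.shape, name=name)
    return json.dumps(md).encode(), frame.tobytes()

def unpack(md_bytes, buf):
    # rebuild the array exactly as a ZeroMQ subscriber would
    md = json.loads(md_bytes)
    arr = np.frombuffer(buf, dtype=md["dtype"]).reshape(md["shape"])
    return md["name"], arr

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
name, out = unpack(*pack(frame, "cam0"))
```

Because only dtype and shape travel in the header, the payload stays a zero-copy-friendly raw buffer on the wire.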
File diff suppressed because it is too large
@@ -1,3 +1,5 @@
"""Get camera IDs."""
import cv2
from typing import List
@@ -1,3 +1,5 @@
"""Allow OpenCV to handle numpy arrays as input."""
import numpy as np
import cv2
@@ -6,6 +8,7 @@ class NpCam(object):
"""Add OpenCV camera controls to a numpy array."""
def __init__(self, img):
"""Create a fake camera for OpenCV based on the initial array."""
assert isinstance(img, np.ndarray)
self.__img = img
self.__is_opened = True
@@ -9,6 +9,7 @@ class CamHandler(object):
"""A camera handler instance that will send commands to and receive data from a camera."""
def __init__(self, name, sub):
"""Create the cam handler."""
self.name = name
self.cmd = None
self.sub: VariableSub = sub
@@ -20,6 +21,7 @@ class Cam(object):
"""A camera publisher instance that will send frames, status, and commands out."""
def __init__(self, name):
"""Create the cam."""
self.name = name
self.cmd = None
self.frame_pub = VariablePub()
@@ -1,94 +0,0 @@
import threading
import time
import cv2
import numpy as np
from displayarray.frame_publising import subscriber_dictionary
from .np_to_opencv import NpCam
from displayarray.uid import uid_for_source
from typing import Union, Tuple
def pub_cam_loop(
cam_id: Union[int, str],
request_size: Tuple[int, int] = (1280, 720),
high_speed: bool = False,
fps_limit: float = 240,
) -> bool:
"""
Publish whichever camera you select to CVCams.<cam_id>.Vid.
You can send a quit command 'quit' to CVCams.<cam_id>.Cmd
Status information, such as failure to open, will be posted to CVCams.<cam_id>.Status
:param high_speed: Selects mjpeg transferring, which most cameras seem to support, so speed isn't limited
:param fps_limit: Limits the frames per second.
:param cam_id: An integer representing which webcam to use, or a string representing a video file.
:param request_size: A tuple with width, then height, to request the video size.
:return: True if loop ended normally, False if it failed somehow.
"""
name = uid_for_source(cam_id)
if isinstance(cam_id, (int, str)):
cam: Union[NpCam, cv2.VideoCapture] = cv2.VideoCapture(cam_id)
elif isinstance(cam_id, np.ndarray):
cam = NpCam(cam_id)
else:
raise TypeError(
"Only strings or ints representing cameras, or numpy arrays representing pictures supported."
)
subscriber_dictionary.register_cam(name)
# cam.set(cv2.CAP_PROP_CONVERT_RGB, 0)
frame_counter = 0
sub = subscriber_dictionary.cam_cmd_sub(name)
sub.return_on_no_data = ""
msg = ""
if high_speed:
cam.set(cv2.CAP_PROP_FOURCC, cv2.CAP_OPENCV_MJPEG)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, request_size[0])
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, request_size[1])
if not cam.isOpened():
subscriber_dictionary.CV_CAMS_DICT[name].status_pub.publish("failed")
return False
now = time.time()
while msg != "quit":
time.sleep(1.0 / (fps_limit - (time.time() - now)))
now = time.time()
(ret, frame) = cam.read() # type: Tuple[bool, np.ndarray ]
if ret is False or not isinstance(frame, np.ndarray):
cam.release()
subscriber_dictionary.CV_CAMS_DICT[name].status_pub.publish("failed")
return False
if cam.get(cv2.CAP_PROP_FRAME_COUNT) > 0:
frame_counter += 1
if frame_counter >= cam.get(cv2.CAP_PROP_FRAME_COUNT):
frame_counter = 0
cam = cv2.VideoCapture(cam_id)
subscriber_dictionary.CV_CAMS_DICT[name].frame_pub.publish(frame)
msg = sub.get()
sub.release()
cam.release()
return True
def pub_cam_thread(
cam_id: Union[int, str],
request_ize: Tuple[int, int] = (1280, 720),
high_speed: bool = False,
fps_limit: float = 240,
) -> threading.Thread:
"""Run pub_cam_loop in a new thread."""
t = threading.Thread(
target=pub_cam_loop, args=(cam_id, request_ize, high_speed, fps_limit)
)
t.start()
return t
@@ -1,100 +0,0 @@
import threading
from typing import Union, Tuple, Any, Callable, List, Optional
import numpy as np
from displayarray.callbacks import global_cv_display_callback
from displayarray.uid import uid_for_source
from displayarray.frame_publising import subscriber_dictionary
from displayarray.frame_publising.frame_publishing import pub_cam_thread
from displayarray.subscriber_window import window_commands
FrameCallable = Callable[[np.ndarray], Optional[np.ndarray]]
class VideoHandlerThread(threading.Thread):
"""Thread for publishing frames from a video source."""
def __init__(
self,
video_source: Union[int, str, np.ndarray] = 0,
callbacks: Optional[Union[List[FrameCallable], FrameCallable]] = None,
request_size: Tuple[int, int] = (-1, -1),
high_speed: bool = True,
fps_limit: float = 240,
):
super(VideoHandlerThread, self).__init__(target=self.loop, args=())
self.cam_id = uid_for_source(video_source)
self.video_source = video_source
if callbacks is None:
callbacks = []
if callable(callbacks):
self.callbacks = [callbacks]
else:
self.callbacks = callbacks
self.request_size = request_size
self.high_speed = high_speed
self.fps_limit = fps_limit
self.exception_raised = None
def __wait_for_cam_id(self):
while str(self.cam_id) not in subscriber_dictionary.CV_CAMS_DICT:
continue
def __apply_callbacks_to_frame(self, frame):
if frame is not None:
frame_c = None
for c in self.callbacks:
try:
frame_c = c(frame)
except TypeError as te:
raise TypeError(
"Callback functions for cvpubsub need to accept two arguments: array and uid"
)
except Exception as e:
self.exception_raised = e
frame = frame_c = self.exception_raised
subscriber_dictionary.stop_cam(self.cam_id)
window_commands.quit()
raise e
if frame_c is not None:
global_cv_display_callback(frame_c, self.cam_id)
else:
global_cv_display_callback(frame, self.cam_id)
def loop(self):
"""Continually get frames from the video publisher, run callbacks on them, and listen to commands."""
t = pub_cam_thread(
self.video_source, self.request_size, self.high_speed, self.fps_limit
)
self.__wait_for_cam_id()
sub_cam = subscriber_dictionary.cam_frame_sub(str(self.cam_id))
sub_owner = subscriber_dictionary.handler_cmd_sub(str(self.cam_id))
msg_owner = sub_owner.return_on_no_data = ""
while msg_owner != "quit":
frame = sub_cam.get(blocking=True, timeout=1.0) # type: np.ndarray
self.__apply_callbacks_to_frame(frame)
msg_owner = sub_owner.get()
sub_owner.release()
sub_cam.release()
subscriber_dictionary.stop_cam(self.cam_id)
t.join()
def display(self, callbacks: List[Callable[[np.ndarray], Any]] = None):
"""
Start default display operation.
For multiple video sources, please use something outside of this class.
:param callbacks: List of callbacks to be run on frames before displaying to the screen.
"""
from displayarray.subscriber_window import SubscriberWindows
if callbacks is None:
callbacks = []
self.start()
SubscriberWindows(video_sources=[self.cam_id], callbacks=callbacks).loop()
self.join()
if self.exception_raised is not None:
raise self.exception_raised
+26 -7
@@ -1,4 +1,6 @@
from displayarray.subscriber_window import window_commands
"""Decorators for creating input loops that OpenCV handles."""
from displayarray.window import window_commands
import threading
import time
@@ -9,6 +11,7 @@ class MouseEvent(object):
"""Holds all the OpenCV mouse event information."""
def __init__(self, event, x, y, flags, param):
"""Create an OpenCV mouse event."""
self.event = event
self.x = x
self.y = y
@@ -62,10 +65,19 @@ class _mouse_loop_thread(object): # NOSONAR
class mouse_loop(object): # NOSONAR
"""Run a function on mouse information that is received by the window, continuously in a new thread."""
"""
Run a function on mouse information that is received by the window, continuously in a new thread.
def __init__(self, f, run_when_no_events=False):
self.t = threading.Thread(target=_mouse_loop_thread(f, run_when_no_events))
>>> @mouse_loop
... def fun(mouse_event):
... print("x:{}, y:{}".format(mouse_event.x, mouse_event.y))
"""
def __init__(self, f):
"""Start a new mouse thread for the decorated function."""
self.t = threading.Thread(
target=_mouse_loop_thread(f, run_when_no_events=False)
)
self.t.start()
def __call__(self, *args, **kwargs):
@@ -111,10 +123,17 @@ class _key_loop_thread(object): # NOSONAR
class key_loop(object): # NOSONAR
"""Run a function on mouse information that is received by the window, continuously in a new thread."""
"""
Run a function on mouse information that is received by the window, continuously in a new thread.
def __init__(self, f: Callable[[str], None], run_when_no_events=False):
self.t = threading.Thread(target=_key_loop_thread(f, run_when_no_events))
>>> @key_loop
... def fun(key):
... print("key pressed:{}".format(key))
"""
def __init__(self, f: Callable[[str], None]):
"""Start a new key thread for the decorated function."""
self.t = threading.Thread(target=_key_loop_thread(f, run_when_no_events=False))
self.t.start()
def __call__(self, *args, **kwargs):

Some files were not shown because too many files have changed in this diff