wowlet-backend/wowlet_backend/routes.py

# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2020, The Monero Project.
# Copyright (c) 2020, dsc@xmr.pm
import os
import asyncio
import json
from quart import websocket, jsonify, send_from_directory
import settings
from wowlet_backend.factory import app, broadcast
from wowlet_backend.wsparse import WebsocketParse
from wowlet_backend.utils import wowlet_data
@app.route("/")
async def root():
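    """Return the latest cached task data as JSON."""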
    data = await wowlet_data()
    return jsonify(data)
@app.route("/suchwow/<path:name>")
async def suchwow(name: str):
"""Download a SuchWow.xyz image"""
    base = os.path.join(settings.cwd, "data", "suchwow")
    return await send_from_directory(base, name)
@app.websocket('/ws')
async def ws():
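    """Websocket endpoint: push cached task data on connect, answer client commands, relay broadcast updates."""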
    data = await wowlet_data()
    # blast available data on connect
    for task_key, task_value in data.items():
        if not task_value:
            continue
        await websocket.send(json.dumps({"cmd": task_key, "data": task_value}).encode())
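
    # rx: consume client messages, dispatch commands through WebsocketParse, reply with the result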
    async def rx():
        while True:
            buffer = await websocket.receive()
            try:
                blob = json.loads(buffer)
if "cmd" not in blob:
continue
cmd = blob.get('cmd')
_data = blob.get('data')
result = await WebsocketParse.parser(cmd, _data)
if result:
rtn = json.dumps({"cmd": cmd, "data": result}).encode()
await websocket.send(rtn)
            except Exception as ex:
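                # malformed or unsupported message; ignore it and keep listening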
                continue
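
    # tx: relay messages published on the broadcast channel to this client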
    async def tx():
        async for _data in broadcast.subscribe():
            payload = json.dumps(_data).encode()
            await websocket.send(payload)

    # bidirectional async rx and tx loops
    consumer_task = asyncio.ensure_future(rx())
    producer_task = asyncio.ensure_future(tx())
    try:
        await asyncio.gather(consumer_task, producer_task)
    finally:
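        # cancel whichever loop is still running once the handler exits (e.g. on client disconnect)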
        consumer_task.cancel()
        producer_task.cancel()
@app.errorhandler(403)
@app.errorhandler(404)
@app.errorhandler(405)
@app.errorhandler(500)
def page_not_found(e):
    return ":)", 500