Architectural Overhaul: Performance, Stability, and Standards #5
Status: Open
shiks2 wants to merge 11 commits into main from architect-review-fixes-10050181459103658900
Commits (11):

- 84d233e Initial plan (Copilot)
- 68ae5cf Redesign root index as Tailwind landing page (Copilot)
- 589d51c Merge pull request #1 from shiks2/copilot/update-index-html-landing-page (shiks2)
- b8947e1 Initial plan (Copilot)
- c90c23a feat: redesign root index.html as ShellScope landing page (Copilot)
- 2f8760c Merge pull request #2 from shiks2/copilot/update-index-html-landing-p… (shiks2)
- b7a3584 Initial plan (Copilot)
- 3bfba5e feat: add users section and roadmap webhook row (Copilot)
- 55d29a7 Merge pull request #4 from shiks2/copilot/add-users-and-use-cases (shiks2)
- 5a6bab7 feat: architectural overhaul for performance and stability (google-labs-jules[bot])
- cfe5736 feat: architectural overhaul for performance and stability (google-labs-jules[bot])
Pre-commit configuration (new file, +31 lines; the filename was lost in rendering, presumably `.pre-commit-config.yaml`):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.11.0
    hooks:
      - id: black
        language_version: python3.10

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files

  - repo: local
    hooks:
      - id: mypy
        name: mypy
        entry: mypy
        language: system
        types: [python]
        args: ["--strict", "shellscope/backend"]
        pass_filenames: false

      - id: flutter-analyze
        name: flutter analyze
        entry: flutter analyze
        language: system
        types: [dart]
        pass_filenames: false
```
`pyproject.toml` (new file, +28 lines):

```toml
[tool.black]
line-length = 88
target-version = ['py310']
include = '\.pyi?$'

[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
disallow_untyped_decorators = true
no_implicit_optional = true
warn_redundant_casts = true
warn_unused_ignores = true
warn_no_return = true
warn_unreachable = true

[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q --cov=shellscope/backend --cov-report=term-missing"
testpaths = [
    "shellscope/backend/tests",
]
python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
```
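As a quick illustration (not part of the PR diff), the `[tool.mypy]` settings above reject untyped and implicitly-Optional code. A function that passes under them looks like the following hypothetical sketch:

```python
from typing import Optional

# Under disallow_untyped_defs / disallow_incomplete_defs, every signature
# must be fully annotated. no_implicit_optional means a None default
# requires an explicit Optional[...] annotation, as below.
def label_process(pid: int, name: Optional[str] = None) -> str:
    if name is None:
        return f"pid-{pid}"
    return f"{name} ({pid})"

print(label_process(4312))            # prints: pid-4312
print(label_process(4312, "python"))  # prints: python (4312)
```

Dropping the `Optional[str]` annotation (writing `name: str = None`) would fail under this configuration.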
Requirements file (new, +8 lines; filename not shown in the rendered diff):

```
wmi==1.5.1
pywin32==306
pytest==7.4.3
mypy==1.8.0
black==23.11.0
flake8==6.1.0
coverage==7.3.2
pytest-cov==4.1.0
```
Database handler (`@@ -1,124 +1,181 @@`). The rendered diff replaces per-call `sqlite3.connect` usage with a single persistent, lock-guarded WAL connection; the reconstructed new version of the file:

```python
import sqlite3
import os
import sys
import time
import threading
from datetime import datetime, timedelta
from typing import Any, List, Optional


class DatabaseHandler:
    def __init__(self, db_name: str = "shellscope.db") -> None:
        self.db_path = self._get_db_path(db_name)
        self.conn: Optional[sqlite3.Connection] = None
        self.lock = threading.Lock()
        self.setup()

    def _get_db_path(self, db_name: str) -> str:
        if db_name == ":memory:":
            return ":memory:"
        if getattr(sys, "frozen", False):
            base_path = os.path.dirname(sys.executable)
        else:
            base_path = os.path.dirname(os.path.abspath(__file__))
        return os.path.join(base_path, db_name)

    def _get_connection(self) -> sqlite3.Connection:
        """Returns a persistent connection or creates one if closed."""
        if self.conn is None:
            try:
                self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
                # Optimize: WAL mode is crucial for concurrency with the UI reader
                self.conn.execute("PRAGMA journal_mode=WAL;")
                # synchronous=NORMAL is faster and safe enough for WAL
                self.conn.execute("PRAGMA synchronous=NORMAL;")
            except sqlite3.Error as e:
                sys.stderr.write(f"DB CONNECT ERROR: {e}\n")
                raise
        return self.conn

    def setup(self) -> None:
        """Initialize DB with Lifecycle columns."""
        with self.lock:
            try:
                conn = self._get_connection()
                cursor = conn.cursor()

                # Check for migration
                cursor.execute(
                    "SELECT name FROM sqlite_master WHERE type='table' AND name='logs';"
                )
                table_exists = cursor.fetchone()

                needs_migration = False
                if table_exists:
                    cursor.execute("PRAGMA table_info(logs)")
                    columns = [info[1] for info in cursor.fetchall()]
                    if "duration" not in columns:
                        needs_migration = True

                if needs_migration:
                    sys.stderr.write("MIGRATION: Dropping old table to update schema.\n")
                    cursor.execute("DROP TABLE logs")

                cursor.execute(
                    """
                    CREATE TABLE IF NOT EXISTS logs (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        pid INTEGER,
                        date TEXT,
                        time TEXT,
                        child TEXT,
                        parent TEXT,
                        args TEXT,
                        suspicious INTEGER,
                        status TEXT,
                        start_time_epoch REAL,
                        end_time TEXT,
                        duration REAL,
                        is_running INTEGER DEFAULT 1
                    )
                    """
                )
                conn.commit()
                # Do not close the persistent connection here
            except sqlite3.Error as e:
                sys.stderr.write(f"DB SETUP ERROR: {e}\n")

    def insert_log(self, log_obj: Any) -> None:
        """Inserts a single log entry."""
        try:
            with self.lock:
                conn = self._get_connection()
                with conn:
                    conn.execute(
                        """
                        INSERT INTO logs (pid, date, time, child, parent, args, suspicious, status, start_time_epoch, is_running)
                        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                        """,
                        log_obj.to_tuple(),
                    )
        except Exception as e:
            sys.stderr.write(f"DB INSERT ERROR: {e}\n")

    def insert_logs_batch(self, log_objs: List[Any]) -> None:
        """Inserts multiple log entries in a single transaction."""
        if not log_objs:
            return
        try:
            data = [log.to_tuple() for log in log_objs]
            with self.lock:
                conn = self._get_connection()
                with conn:
                    conn.executemany(
                        """
                        INSERT INTO logs (pid, date, time, child, parent, args, suspicious, status, start_time_epoch, is_running)
                        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                        """,
                        data,
                    )
        except Exception as e:
            sys.stderr.write(f"DB BATCH INSERT ERROR: {e}\n")

    def get_process_start_time(self, pid: int) -> float:
        """Retrieves the start time epoch for a running process."""
        try:
            with self.lock:
                conn = self._get_connection()
                # Optimization: Use indexed query (pid, is_running)
                # We assume is_running=1 for active processes.
                cursor = conn.execute(
                    "SELECT start_time_epoch FROM logs WHERE pid = ? AND is_running = 1 ORDER BY id DESC LIMIT 1",
                    (pid,),
                )
                row = cursor.fetchone()
                if row:
                    return float(row[0])
        except Exception as e:
            sys.stderr.write(f"DB GET START TIME ERROR: {e}\n")
        return 0.0

    def update_log_duration(self, pid: int, end_time_str: str, duration: float) -> None:
        """Updates a process entry when it stops."""
        try:
            with self.lock:
                conn = self._get_connection()
                with conn:
                    cursor = conn.execute(
                        """
                        UPDATE logs
                        SET is_running = 0, end_time = ?, duration = ?
                        WHERE pid = ? AND is_running = 1
                        """,
                        (end_time_str, duration, pid),
                    )
                    if cursor.rowcount == 0:
                        # Start event was missed or the row is already closed.
                        pass
        except Exception as e:
            sys.stderr.write(f"DB UPDATE ERROR: {e}\n")

    def prune_old_logs(self, days_to_keep: int = 7) -> None:
        try:
            cutoff_date = (datetime.now() - timedelta(days=days_to_keep)).strftime(
                "%Y-%m-%d"
            )
            with self.lock:
                conn = self._get_connection()
                with conn:
                    cursor = conn.execute("DELETE FROM logs WHERE date < ?", (cutoff_date,))
                    count = cursor.rowcount
            if count > 0:
                sys.stderr.write(f"MAINTENANCE: Pruned {count} old logs.\n")
        except Exception as e:
            sys.stderr.write(f"DB PRUNE ERROR: {e}\n")

    def close(self) -> None:
        """Closes the persistent connection."""
        with self.lock:
            if self.conn:
                try:
                    self.conn.close()
                except Exception:
                    pass
                self.conn = None
```
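The concurrency pattern the diff introduces (one persistent WAL connection, a `threading.Lock` serializing access, and batched writes inside a single transaction) can be sketched in isolation. `MiniHandler` below is a hypothetical condensation for illustration, not code from the PR:

```python
import os
import sqlite3
import tempfile
import threading


class MiniHandler:
    """Condensed sketch: persistent WAL connection guarded by a lock."""

    def __init__(self, path: str) -> None:
        self.lock = threading.Lock()
        self.conn = sqlite3.connect(path, check_same_thread=False)
        # WAL lets a UI reader run concurrently with the writer;
        # synchronous=NORMAL trades a little durability for speed under WAL.
        self.conn.execute("PRAGMA journal_mode=WAL;")
        self.conn.execute("PRAGMA synchronous=NORMAL;")
        self.conn.execute("CREATE TABLE IF NOT EXISTS logs (pid INTEGER, msg TEXT)")

    def insert_batch(self, rows: list) -> None:
        with self.lock:        # serialize writers across threads
            with self.conn:    # one transaction for the whole batch
                self.conn.executemany("INSERT INTO logs VALUES (?, ?)", rows)


path = os.path.join(tempfile.mkdtemp(), "demo.db")
h = MiniHandler(path)
threads = [
    threading.Thread(target=h.insert_batch, args=([(i, f"row-{i}")],))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

with h.lock:
    n = h.conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(n)  # prints: 8
```

Because every access goes through `self.lock`, `check_same_thread=False` is safe here: the lock guarantees only one thread touches the connection at a time, while `executemany` keeps a batch to a single commit instead of one fsync per row.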
Review comment:

Using check_same_thread=False with a persistent connection requires careful thread safety management. While SQLite with WAL mode supports concurrent reads, writes must still be serialized. The current implementation uses 'with conn:' blocks which provide transaction safety within a single thread, but if multiple threads call database methods simultaneously, race conditions could occur. Consider adding a threading.Lock to serialize access to the persistent connection, or document that this class should only be used from a single thread.
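The comment's last suggestion, confining SQLite to a single thread, can also be met without any lock by giving one dedicated writer thread sole ownership of the connection and feeding it through a queue. A minimal sketch with hypothetical names (not code from this PR):

```python
import os
import queue
import sqlite3
import tempfile
import threading

q: "queue.Queue" = queue.Queue()


def writer(path: str) -> None:
    # The connection is created and used only on this thread, so the
    # default check_same_thread=True protection can stay enabled.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE logs (pid INTEGER, msg TEXT)")
    while True:
        item = q.get()
        if item is None:  # sentinel: shut down the writer
            break
        with conn:
            conn.execute("INSERT INTO logs VALUES (?, ?)", item)
    conn.close()


path = os.path.join(tempfile.mkdtemp(), "demo.db")
t = threading.Thread(target=writer, args=(path,))
t.start()
for i in range(5):
    q.put((i, f"msg-{i}"))  # any thread may enqueue
q.put(None)
t.join()

count = sqlite3.connect(path).execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(count)  # prints: 5
```

The trade-off versus the lock-based approach: enqueuing never blocks on I/O, but reads that must observe the latest write need their own synchronization with the queue's backlog.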