Hi, I'm Serena

I am an Independent Researcher focused on AI Systems and Logic Design. This GitHub is a space where I share the exercises, investigations, and projects I develop while learning to use and understand Artificial Intelligence.

I don't have a traditional academic background. I am self-taught and operate on a Just-in-Time Learning model, acquiring technical knowledge as it becomes necessary to design a solution or analyze a model's limits.

My Path and Mission

This repository functions as a personal gallery for my growth in the field. I believe that understanding these technologies requires active use, construction, and an honest analysis of their boundaries.

My current approach focuses on systemic design, leveraging AI to bridge the gap between concepts and functional prototypes.

Featured Investigations and Projects

This project started as an experiment to digitize a symbolic language from my youth and evolved, over an intensive four-day sprint, into a functional cryptographic system. Using AI to stress-test the design, I integrated a lattice-based (LWE) PRNG and PBKDF2 key derivation for resilience. It serves as a practical exercise in architecture, showing how symbolic logic can be transformed into security infrastructure.
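As a rough sketch of the PBKDF2 step mentioned above (illustrative parameter values only, not the project's actual code):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a passphrase into a fixed-length 32-byte key with PBKDF2-HMAC-SHA256.

    The iteration count and digest are illustrative defaults, not the values
    used in the actual project.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)          # a fresh random salt per derivation
key = derive_key("correct horse battery staple", salt)
print(len(key))                # 32-byte derived key
```

The point of the key-stretching step is that a brute-force attacker must pay the full iteration cost for every guess, while a legitimate user pays it only once.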

Research exploring the limits of LLM stability by investigating how low-entropy patterns, such as whitespace, can destabilize internal reasoning.
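To give a concrete sense of what "low-entropy" means here, a minimal sketch (my illustration, not the research harness itself) comparing the Shannon entropy of ordinary text with a repetitive whitespace pattern:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the empirical character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = "The quick brown fox jumps over the lazy dog."
padded = " \t\n" * 50   # a repetitive whitespace pattern: only three symbols
print(shannon_entropy(normal) > shannon_entropy(padded))  # True
```

Ordinary English sits around 4 bits per character at the symbol level, while a pattern drawn from three equiprobable whitespace characters caps out at log2(3) ≈ 1.58 bits, which is what makes such inputs "low-entropy" in this sense.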

A study on how In-Context Learning (ICL) functions as a heuristic shortcut that can fundamentally alter a model's mathematical logic.
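To make the idea concrete, a toy sketch (illustrative only, not the study's actual prompts) of how a few in-context examples can encode a rule that conflicts with standard arithmetic:

```python
# Illustrative only: a few-shot prompt whose examples silently redefine "+"
# as multiplication. A model following the in-context pattern may complete
# "5 + 5 =" with 25 rather than the arithmetically correct 10.
examples = [(2, 3, 6), (4, 4, 16), (3, 5, 15)]  # each line encodes a + b = a * b
prompt = "\n".join(f"{a} + {b} = {c}" for a, b, c in examples)
prompt += "\n5 + 5 ="
print(prompt)
```

Whether the model answers 10 or 25 is a simple probe of whether the in-context "guide" has overridden its learned arithmetic.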

Methodology

Logic and Architecture: I focus on designing the core logic and systemic flow, using AI to assist in technical stress-testing and implementation.

Learning through Construction: I explore complex fields, such as cryptography, when they are the necessary building blocks for the projects I am working on.

Evolutionary Documentation: I value the process of discovery. This includes the learning curve and the refinement of my own criteria, moving away from earlier drafting styles I now consider over-embellished, toward more direct and grounded technical language.


This space documents my path, one experiment at a time.



Pinned

  1. ModulationofReasoninginLLMs

     This repository showcases research into the fundamental impact of low-cost in-context learning on the internal logic of Large Language Models (LLMs). By using different ICL guides…

  2. LLMReadteamSymbolic

     This repository showcases research into novel adversarial techniques for Large Language Models (LLMs), focusing on the use of a unique symbolic language combined with social engineering to identify…

  3. LLMReadTeamLinguisticDoS

     Exploring "Semantic Re-signification": a novel red-teaming technique that induces linguistic denial of service (DoS) and ethical misalignment in Large Language Models (LLMs) by manipulating their c…

  4. RedTeamLowEnthropy

     This repository documents ongoing research into Low-Entropy Languages (LEL), an unconventional vector for red teaming large language models (LLMs). An LEL is defined as a non-traditional input syst…