Shiya in Paris

Shiya Lilith Huber

AI Ethics Researcher & Strategic Systems Thinker

Based in Paris & Munich, focused on AI alignment, deceptive behavior in language models, and high-impact system design


About Me

Independent researcher focused on AI ethics, deceptive behavior in language models, and high-impact system design. I attended RAISE 2025 in Paris, where I gained insights from some of the sharpest minds in AI.

I also participated in an AI event hosted by Snowflake at Station F that focused on innovation, data infrastructure, and emerging AI safety use cases. Currently preparing for university entry in AI ethics with long-term goals in alignment and human-technology interaction.

Paris, Munich (relocatable)

Research & Expertise

Exploring the intersection of AI ethics, alignment research, and strategic systems thinking

Core Skills & Tools

  • Strategic Systems Thinking
  • AI Ethics & Risk Analysis
  • LLM Prompt Engineering (LM Studio)
  • Behavioral Modeling & Influence Strategy
  • Research Writing (LessWrong style)
  • Governance & Interpretability Concepts
  • Fluent in English & German, currently learning French

Projects & Research

2025
The Illusion of Alignment
Essay

Analyzed deceptive model behavior, mesa-optimization, and the illusion of oversight in language models. Inspired by Anthropic, MATS research, and LessWrong discussions.

In Progress
Strategic Behavior in LLMs
Working Paper

Examines feedback-loop manipulation in models under observation and investigates how attention-based architectures may reinforce deceptive alignment strategies.

2025
RAISE 2025 Attendance
Conference

Networked with AI ethics professionals and attended expert panels on alignment risk, model control, and safe AI deployment at Station F, Paris.

2025
XYZ Research Paper
Research Collaboration

Collaboration across continents with Dr. Gadgil, an assistant professor holding a Ph.D. in Cybersecurity. Cross-disciplinary work exploring the security implications of AI.

Education

Wirtschaftsschule Alpenland Bad Aibling, Bavaria

Intermediate school-leaving certificate completed (July 2025)

Focus: Business Control, Mathematics, Ethics

Autodidactic research in AI alignment and behavioral safety

Affiliations & Interests
  • Member, Association for the Advancement of Artificial Intelligence (AAAI)
  • Active contributor to strategic discourse on alignment, safety, and cognitive deception
  • Interests: Existential risk, system design, martial arts, chess, and systems manipulation

Let's Connect

Interested in AI ethics research, collaboration, or just want to discuss alignment challenges? I'd love to hear from you.

Get in Touch
Feel free to reach out through any of these channels

Location

Paris, Munich (relocatable)

Send a Message
I'll get back to you as soon as possible