Cyber Threat Intelligence

Understanding how cyber threats emerge, scale, and intersect with geopolitical, criminal, and institutional dynamics, with a focus on attribution, escalation patterns, and decision impact in high-risk environments.

Forensic Oversight & Evidence Interpretation

From digital artefacts to accountable decisions: forensic oversight and evidence-based interpretation supporting investigations, litigation, and governance processes in cyber and crypto-financial contexts.

AI & Law · Ethical & Regulatory Tech

Analysing the legal, ethical, and governance implications of artificial intelligence, from disinformation and automated influence to surveillance, accountability, and decision-making systems.

Strategic Publications & Civic Initiatives

Scholarly publications and public initiatives addressing digital vulnerability, autonomy, and governance, bridging academic research, civic protection, and strategic intelligence practice.

About me

Investigating the logic behind digital risk

I examine digital risk at the intersection of technology, law, and power, focusing on how cyber threats evolve within geopolitical, criminal, and institutional contexts.

From dark web intelligence and crypto-financial tracing to forensic analysis, my work connects evidence, intelligence, and accountability, translating complex threat ecosystems into insights that can support governance and decision-making.

I support institutions, organisations, and legal professionals in understanding how digital risk materialises, how responsibility can be attributed, and how resilience can be built in environments shaped by automation and artificial intelligence.

Based in Italy, I bring over twenty years of experience across digital forensics, cybersecurity, and legal-technical analysis, working at the intersection of investigation, regulation, and strategic oversight.

My professional path began when I became the first woman in Italy to obtain the CIFI certification. Since founding Brixia Forensics Institute, I have served as a court-appointed expert in digital forensics, contributing to both investigative and preventive contexts and bridging forensic computer science with legal and strategic reasoning.

Alongside my professional practice, I continue to develop my expertise in cyber threat intelligence and AI-amplified risk governance, including ongoing preparation for the Certified Threat Intelligence Analyst (CTIA) certification.


Research Focus & Strategic Domains

Each research area reflects a strategic commitment to understanding how digital threats, global power dynamics, and financial crime intersect in today’s hyperconnected world.
These interests guide my work at the crossroads of technology, intelligence, and legal accountability.

Democracy & disinformation: AI manipulation and the digital contest for trust

Researching how AI, deepfakes, and information warfare reshape democratic processes, influence public perception, and destabilise trust. Focus on e-voting security, cognitive manipulation, and the strategic role of technology in shaping political and institutional vulnerability.

Dark Web, crypto forensics & grey zones: mapping the unseen threatscape

Studying the convergence of digital crime, financial opacity, and intelligence infrastructure.
Key topics include dark web forensics, crypto tracing, anti-money laundering (AML), zero-day markets, and the legal asymmetries within high-risk, decentralised ecosystems.

Embedded threats & digital exposure: forensics at the edge of the real

Analysing vulnerabilities at the intersection of device, user, and system.
Topics include mobile and vehicle forensics, drone-based threats, cyber harassment, and exposure risks tied to telemetry, IoT, and behavioural surveillance.

Projects & Initiatives

Exploring critical domains at the intersection of technology, power, evidence, and governance.

Brixia Forensics Institute

Brixia Forensics Institute was founded in 2008 following a top-ranking public innovation project and has operated for over a decade at the forefront of digital forensics and evidentiary analysis. It has supported investigations, litigation, and institutional accountability through expert reporting, attribution work, and forensic interpretation. This experience established the methodological foundation that informs my current approach to intelligence, risk governance, and decision accountability across cyber, financial crime, and regulatory domains.

Toralya

Toralya was launched in 2025 as a boutique AI research and advisory initiative focused on governance, accountability, and executive decision-making in AI-amplified risk environments. It supports executive, legal, and regulatory stakeholders through research-driven analysis and governance-oriented insight addressing how artificial intelligence reshapes risk exposure, decision authority, and institutional responsibility across complex digital and financial contexts.
In the second half of January 2026, Toralya will transition to a DMCC-licensed company, strengthening its organisational structure while preserving full continuity in scope, methodology, and advisory standards.

Jeopardies

Jeopardies is a research-driven initiative examining cyber geopolitics, strategic exposure, and systemic digital risk at the intersection of technology, power, and intelligence. Through analytical research and scenario-based assessment, it explores how digital infrastructures, cyber operations, and information control shape geopolitical competition and institutional vulnerability. From 2026 onward, the initiative focuses on AI-enabled strategic risk, including influence operations, escalation dynamics, and the governance challenges posed by autonomous decision-making.

A civic initiative addressing digital violence, manipulation, and exposure risks affecting women and vulnerable groups. Through tools, awareness, and strategic analysis, it connects civic protection with broader questions of digital governance, autonomy, and resilience. From 2026 onward, the initiative focuses on AI governance in the context of gender-based online hate, algorithmic amplification of abuse, and the protection of vulnerable subjects in digital environments. It frames AI governance as a systemic responsibility, not a product feature.

Latest articles and insights

Thinking critically at the edge of complexity. Analyses and reflections on the forces shaping our digital future.

As AI becomes embedded across enterprise decision-making, governance is increasingly framed as a board-level responsibility. However, AI authority cannot be sustained without forensic readiness and cyber-risk awareness. This article examines why enterprise AI governance in 2026 must be grounded in digital forensics to ensure demonstrable accountability, auditability, and decision legitimacy under pressure. It argues that cybersecurity and forensics are no longer technical support functions, but core governance infrastructure. Without them, organisations may automate faster, but they lose the ability to retain authority when decisions are contested.
AI governance is increasingly defined through ethical principles, regulatory frameworks, and organisational policies. However, as AI systems operate within contested digital environments, governance models that ignore cyber risk and forensic realities prove structurally inadequate. This article argues that effective AI governance in 2026 requires a shift from abstract frameworks to adversarial-aware control structures. By integrating cyber intelligence and forensic reasoning, organisations can design governance models capable of withstanding manipulation, system degradation, and post-incident scrutiny. Without this foundation, AI governance remains aspirational rather than enforceable, particularly in high-risk, automated decision-making contexts.
This article examines the evolution of AI governance in cybersecurity at the start of 2026, focusing on decision rights and control architecture as foundational mechanisms for accountable automation. As AI-driven systems increasingly participate in security decisions, ranging from threat detection to autonomous response, traditional governance models based on principles and static compliance prove insufficient. The analysis argues that effective AI governance depends on clearly defined decision rights, enforceable control boundaries, and the ability to reconstruct and audit AI-enabled actions under pressure. By integrating governance directly into cybersecurity control architectures, organisations can align automation with accountability, reduce systemic cyber risk, and ensure regulatory and institutional defensibility. The article offers a forward-looking, evidence-based perspective on how AI governance must evolve to remain credible, resilient, and operationally effective in high-risk digital environments.

Contact me

Licensed by DMCC – Dubai, UAE

I engage in governance-focused advisory activities, strategic research exchange, and institutional dialogue related to AI risk, cybersecurity, and digital regulation.
Email: info@toralya.io

All messages are read personally. I will get back to you as soon as possible.