Research · Analysis · System Behavior
Researching how complex systems actually fail —
before attackers or outages force the lesson.
DevPsh research focuses on adversarial behavior, systemic risk, and the unintended consequences of scale, automation, and complexity.
How we approach research
Our research does not begin with theory. It begins with real systems, real failures, and real adversaries.
We study how technology behaves under pressure — where assumptions break, controls degrade, and complexity obscures risk.
Each research topic below expands into deeper analysis.
Systemic risk in cloud-native identity models
Why identity has become the primary attack surface in modern cloud environments.
Cloud-native systems rely heavily on identity-driven access rather than traditional network boundaries.
As organizations scale, identity relationships multiply across services, accounts, pipelines, and automation frameworks.
Our research shows that risk rarely comes from a single misconfiguration, but from the interaction of trust assumptions across identity boundaries.
Attackers exploit these implicit trust paths to move laterally without triggering conventional alerts.
Effective defense requires visibility into identity flows, privilege inheritance, and non-obvious trust relationships.
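As a minimal sketch of what visibility into identity flows can look like, the hypothetical snippet below models trust grants as a directed graph and enumerates the transitive paths from a low-privilege identity to a sensitive one. The identity names and the `CAN_ASSUME` relation are invented for illustration, not drawn from a real environment.

```python
from collections import deque

# Hypothetical trust grants: "A can act as B" (role assumption,
# service-account impersonation, pipeline deploy rights, etc.).
CAN_ASSUME = {
    "dev-user":     ["ci-pipeline"],
    "ci-pipeline":  ["deploy-role"],
    "deploy-role":  ["prod-admin"],   # implicit path to high privilege
    "support-user": ["read-only-role"],
}

def trust_paths(source: str, target: str) -> list[list[str]]:
    """Enumerate all transitive trust paths from source to target (BFS)."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in CAN_ASSUME.get(path[-1], []):
            if nxt not in path:        # avoid cycles
                queue.append(path + [nxt])
    return paths

# A low-privilege developer reaches prod-admin in three hops, even though
# no single grant looks dangerous in isolation.
for p in trust_paths("dev-user", "prod-admin"):
    print(" -> ".join(p))
```

No single grant in this map looks dangerous on its own; the risk appears only when the grants compose, which is exactly the interaction of trust assumptions described above.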
Why security controls decay over time
Understanding how once-effective defenses silently lose relevance.
Security controls are often designed for a snapshot in time — a specific architecture, threat model, or regulatory requirement.
Over time, systems evolve faster than controls are reassessed.
Our research highlights how control effectiveness degrades due to automation, operational shortcuts, and changing attacker incentives.
This decay is rarely visible until a real incident occurs.
Continuous validation and adversarial testing are the only reliable methods to counter control drift.
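One way to make continuous validation concrete is to express each control as an executable check that is re-run on a schedule rather than reviewed at a point in time. The sketch below shows the generic pattern only; the control names and check bodies are placeholder assumptions, and in practice each check would query a real system (a cloud API, firewall state, IdP configuration).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    name: str
    check: Callable[[], bool]   # returns True while the control still holds

# Illustrative placeholder checks only.
def mfa_enforced() -> bool:
    return True    # assumption: this control still holds

def legacy_port_closed() -> bool:
    return False   # assumption: drift has silently reopened the port

CONTROLS = [
    Control("MFA required for admins", mfa_enforced),
    Control("Legacy service port closed", legacy_port_closed),
]

def validate(controls: list[Control]) -> list[str]:
    """Run every control check; report the ones that have drifted."""
    return [c.name for c in controls if not c.check()]

if __name__ == "__main__":
    for failed in validate(CONTROLS):
        print(f"DRIFT: {failed}")   # surfaces decay before an incident does
```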
AI systems as new risk amplifiers
How intelligent systems introduce novel failure modes at scale.
AI systems are often treated as isolated models rather than integrated components within larger systems.
Our research examines how data pipelines, model access, and feedback loops can unintentionally amplify risk.
Threats include data poisoning, inference leakage, model abuse, and governance blind spots.
These risks are architectural, not algorithmic: they arise from how models are connected to data, users, and downstream systems, not from flaws in the models themselves.
Organizations must secure AI systems as socio-technical systems — not just code artifacts.
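To make the feedback-loop point concrete, here is a deliberately toy simulation under assumed parameters: a model is retrained on data skewed by its own previous outputs, so a small initial bias compounds each cycle instead of washing out.

```python
# Toy feedback loop: a model's output skew feeds back into its next
# training set. Parameters are illustrative assumptions, not measurements.
bias = 0.02          # initial skew in the training data (2%)
feedback_gain = 1.5  # each retraining cycle amplifies the skew it ingests

for cycle in range(1, 9):
    bias *= feedback_gain
    print(f"cycle {cycle}: effective bias {bias:.1%}")

# After a handful of cycles the skew dominates the signal: the failure
# lives in the pipeline's architecture, not in any single model's weights.
```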
Why incident response fails under pressure
Decision breakdowns during real-world security incidents.
Incident response plans often assume ideal conditions: clear signals, available experts, and sufficient time.
Our research shows that real incidents involve ambiguity, competing priorities, and incomplete information.
Under pressure, teams default to organizational behavior — not documented procedures.
Effective response depends on decision clarity, ownership, and rehearsal under realistic conditions.
Red teaming and simulation are essential research tools, not optional exercises.
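As one illustration of rehearsal under realistic conditions, the toy generator below strips random facts from an incident scenario so that a team practices deciding with the same ambiguity real incidents bring. All scenario facts and parameters are invented for illustration.

```python
import random

# Toy tabletop generator: hides facts from a scenario to rehearse
# decision-making under incomplete information.
FACTS = [
    "alert source identified",
    "affected systems enumerated",
    "attacker still active?",
    "data exfiltration confirmed?",
    "senior responder available",
]

def scenario(missing=2, seed=None):
    """Return the scenario with `missing` facts replaced by UNKNOWN."""
    rng = random.Random(seed)
    hidden = set(rng.sample(range(len(FACTS)), missing))
    return [f if i not in hidden else "UNKNOWN"
            for i, f in enumerate(FACTS)]

for line in scenario(missing=2, seed=7):
    print(line)
```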
Explore deeper research with DevPsh
Engage with our research to understand risk, system behavior, and resilience before failure occurs.
Connect