This is a role for an engineer who wants to tackle the hardest unsolved problems in software development, where AI systems write code, deploy software, and make decisions at machine speed.
You’ll be joining a globally recognised technology company operating at the forefront of AI, security, and developer tooling, building the control layer that ensures AI-generated software is safe, auditable, and reliable.
You’ll explore new ideas before they become products, build prototypes that test assumptions, and generate the technical intelligence that shapes the next generation of secure software development.
This is a founding R&D role for a builder who thrives in ambiguity and enjoys solving problems that don't yet have clear answers.
Why This Role Exists:
Software development is changing rapidly. AI is writing more code than ever before. Systems are becoming more autonomous. Security and governance are becoming critical infrastructure.
Organisations need engineers who can:
- Explore emerging technology
- Design secure systems
- Build trustworthy software
- Move faster than the market
- Prototype quickly
- Think independently
- Challenge assumptions
- Explore new ideas
- Ship working solutions
This role sits right at the centre of that transformation.
What You’ll Be Doing:
Solving the Hard Problems:
You’ll lead structured discovery into complex technical challenges like:
- Governing AI agents writing production code
- Detecting and managing risk in AI-generated software
- Building secure developer workflows for AI-native systems
- Designing signals and controls across modern engineering platforms
- Establishing trust and traceability in automated software delivery
You’ll build working prototypes, test assumptions quickly, and provide clear technical direction on what to build next, and what to stop building.
Push Into the Whitespace:
You’ll explore emerging areas of technology before they are widely understood. Think:
- Agentic software development
- Secure code generation
- AI governance and risk intelligence
- Developer workflow automation
- Platform-level security and trust models
This is forward-looking work where experimentation matters as much as execution.
Keep the Organisation Pointing Forward:
You’ll track how developers are actually working, monitor new technologies, and turn technical insights into practical guidance that shapes product strategy.
Your work will influence:
- Engineering direction
- Product architecture
- Security standards
- Industry thinking
Core Technical Background:
- AI / LLM systems or agent frameworks
- Developer tooling or platform engineering
- Application security or secure software development
- Distributed systems or event-driven architectures
- Observability, CI/CD, or developer workflow automation
Technology Exposure:
- Python, Go, Rust, TypeScript, or Java
- Kubernetes, Docker, or cloud-native platforms
- GitHub Actions, CI/CD pipelines, or developer platforms
- Security tooling or vulnerability detection systems
- AI frameworks, RAG systems, or model integration workflows
Please apply below or reach out to Ruby to discuss in more detail - ruby@theonset.com.au