Cigdem has more than ten years of experience in research and development in mobile and wireless networks, in both academia and industry. She worked on standards for building privacy and trust in the Internet of Things during her time at Nominet as a Senior Researcher (2015-2019). Between 2012 and 2015, she was a Senior Lecturer at Oxford Brookes University, where she lectured and conducted research on wireless and mobile networks, with a particular focus on energy and interference efficiency and the Internet of Robotic Things. From 2008 to 2012, she was with Telekom Innovation Labs (the main research unit of Deutsche Telekom) as a Senior Research Scientist, leading projects on Wireless Mesh Networks. Her work has appeared in more than 50 journal and conference publications. She is a Fulbright fellow (Department of Computer Science, UIUC) and a Vodafone fellow. Cigdem is a passionate advocate of increasing diversity awareness in computing. She is the Communication Co-Chair of ACM Women and, between 2019 and 2022, was the Communication and Outreach Chair of ACM Women-Europe. She collaborates with the Micro:bit Educational Foundation to support its mission of teaching coding to school children, and is the co-author of the book Networking with the Micro:bit.
Research Area
This PhD sits at the intersection of usable privacy, AI agents, human-centred security, and web interaction design. The project will explore how users’ privacy intentions can be meaningfully captured, interpreted, and enacted across complex digital systems. The focus is on AI-mediated privacy decision-making, particularly in everyday web interactions such as cookie consent banners, data-sharing prompts, authentication flows, and other consent and authorisation mechanisms.
This research extends established work in privacy-preserving protocols and usable security into the emerging domain of AI-mediated user agency. Building on foundations in IoT security standardisation (IETF ACE) and privacy-by-design principles, the project investigates how AI agents can operationalise user privacy preferences across heterogeneous web ecosystems while maintaining transparency and accountability.
Project Description
Modern web users are repeatedly asked to make privacy decisions, often under time pressure and cognitive overload, and through poorly designed interfaces. While regulations require transparency and consent, current mechanisms largely shift responsibility onto users without offering meaningful support.
This PhD will explore how AI agents can act as privacy mediators between users and web services by:
- Interpreting high-level user privacy intent expressed in natural language, preferences, or behavioural patterns
- Mapping these intents to concrete, enforceable configurations (e.g., cookie choices, consent settings, authorisation scopes)
- Dynamically adapting decisions across websites, services, and contexts
- Supporting user agency, understanding, and trust rather than replacing decision-making
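As an illustration of the first two steps, a high-level privacy intent might be captured as a small structured object and mapped onto the consent categories a typical cookie banner exposes. The schema, field names, and category names below are purely hypothetical, a sketch of one possible machine-actionable representation rather than a committed design:

```python
from dataclasses import dataclass

# Hypothetical high-level privacy intent, as a user might state it once.
@dataclass
class PrivacyIntent:
    allow_analytics: bool = False          # aggregate usage statistics
    allow_personalisation: bool = False    # content tailored to the user
    allow_third_party_marketing: bool = False

# Map the intent onto common cookie-banner consent categories.
# "strictly_necessary" cannot be refused under most consent frameworks.
def to_cookie_consent(intent: PrivacyIntent) -> dict:
    return {
        "strictly_necessary": True,
        "analytics": intent.allow_analytics,
        "personalisation": intent.allow_personalisation,
        "marketing": intent.allow_third_party_marketing,
    }

intent = PrivacyIntent(allow_analytics=True)
print(to_cookie_consent(intent))
# {'strictly_necessary': True, 'analytics': True, 'personalisation': False, 'marketing': False}
```

A representation in this spirit stays human-readable (each field is a plain yes/no preference) while being directly enforceable by an agent; the research challenge lies in scaling such mappings across the inconsistent category vocabularies real consent platforms use.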
The research will investigate how such agents can operate ethically and transparently, balancing automation with user control. Concrete prototypes will address real web interaction scenarios, including:
- Consent flows and scope negotiation based on representative, real-world authorisation infrastructures such as OAuth/OpenID Connect
- Cookie consent management and third-party tracking configurations
- Cross-site privacy preference propagation
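For the OAuth/OpenID Connect scenario, one simple form of agent-side scope negotiation is to intersect the scopes a client requests with those the user's stated intent permits, before any consent screen is shown. The permitted-scope policy below is an illustrative assumption, not part of either standard, though the scope names follow OpenID Connect conventions:

```python
# Hypothetical policy: the OAuth scopes this user's intent permits.
USER_PERMITTED_SCOPES = {"openid", "profile"}

def negotiate_scopes(requested):
    """Split a client's requested scopes into granted and refused lists,
    preserving the order in which they were requested."""
    granted = [s for s in requested if s in USER_PERMITTED_SCOPES]
    refused = [s for s in requested if s not in USER_PERMITTED_SCOPES]
    return granted, refused

granted, refused = negotiate_scopes(["openid", "profile", "email", "offline_access"])
print(granted)  # ['openid', 'profile']
print(refused)  # ['email', 'offline_access']
```

Surfacing the refused list back to the user, with a reason, is one way such an agent could support the contestability and accountability goals discussed below.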
Key research questions may include:
- How can user privacy intent be represented in ways that are both machine-actionable and human-understandable?
- How can AI agents negotiate consent decisions on behalf of users while preserving agency and accountability?
- What design patterns support trust, contestability, and explainability in privacy-aware agents?
- How can such systems scale across heterogeneous web ecosystems?
Candidates are strongly encouraged to propose refinements to the research questions based on their interests and expertise.
Relevant Prior Work
The project builds on and is informed by prior work including:
Karen Renaud, Cigdem Sengul, Kovila Coopamootoo, Bryan Clift, Jacqui Taylor, Mark Springett, and Ben Morrison. 2024. “We’re Not That Gullible!” Revealing Dark Pattern Mental Models of 11-12-Year-Old Scottish Children. ACM Trans. Comput.-Hum. Interact. 31, 3, Article 33 (June 2024), 41 pages. https://doi.org/10.1145/3660342
Manohar, A., Sengul, C., and Chen, J. 2023. Inclusive Privacy Control at Home for Smart Health. In: Hayes, S., Jopling, M., Connor, S., and Johnson, M. (eds) Human Data Interaction, Disadvantage and Skills in the Community. Postdigital Science and Education. Springer, Cham. https://doi.org/10.1007/978-3-031-31875-7_9
Sengul, C. and Kirby, A. A. 2023. RFC 9431: Message Queuing Telemetry Transport (MQTT) and Transport Layer Security (TLS) Profile of Authentication and Authorization for Constrained Environments (ACE) Framework. RFC Editor. https://www.rfc-editor.org/rfc/rfc9431.html
Why This Project Matters
This work addresses a critical gap between privacy regulation, technical enforcement, and lived user experience. By moving beyond static consent mechanisms, the project aims to contribute to:
- More inclusive and accessible privacy controls
- Reduced cognitive burden for users
- Practical pathways for responsible AI deployment in everyday digital life
- Evidence-based alternatives to dark-pattern-driven consent design
- Bridging the gap between privacy regulation and technical implementation
The outcomes are relevant to academia, standards bodies, regulators, and industry.
Methods and Skills You Will Develop
The project combines technical system-building with rigorous human-centred evaluation, and its methodological approaches will be tailored to the candidate's strengths.
Technical Development may include:
- Agent architecture design with explainable reasoning components
- Protocol analysis and privacy-preserving configuration generation
- Browser extension or middleware prototype development
- Integration with existing consent management platforms
Human-Centred Research will consider:
- Participatory design studies with diverse user populations
- Longitudinal evaluation of agent-mediated privacy decisions
- Trust and agency assessment frameworks
- Accessibility and inclusion-focused design iterations
Interdisciplinary Analysis may include GDPR compliance verification and regulatory alignment.
What We Are Looking For
We are looking for a motivated, self-funded PhD candidate with:
- A strong background in Computer Science or a closely related discipline
- Interest in privacy, security, AI, and human-centred computing
- Willingness to engage with interdisciplinary perspectives
Particularly valuable backgrounds include:
- Web security and authentication protocols
- AI/ML with focus on transparency or explainability
- User experience research in security-sensitive contexts
- Policy or regulatory analysis in digital rights
- Accessibility and inclusive design
Strong candidates will demonstrate curiosity about socio-technical challenges and commitment to research that serves diverse populations.
Supervision and Research Environment
The PhD will be supervised within Brunel's Computer Science for Social Good Research Group and the Centre for Artificial Intelligence: Social and Digital Innovation. The student will benefit from:
- Expertise in privacy-preserving protocols, IoT security, and AI governance
- Engagement opportunities with EPSRC research networks (e.g., SPRITE)
- Connections to standards bodies (IETF) and policy communities
- Collaborative environment emphasising responsible innovation and social impact
- Opportunities for interdisciplinary engagement across CS, HCI, and digital regulation
The project draws on supervisory expertise in:
- Privacy-preserving protocol design and standardisation
- AI governance and responsible AI deployment
- Secure authentication and authorisation mechanisms
- Human-centred system design for social good
The supervisory approach emphasises student agency, iterative feedback, and professional development beyond technical contributions, including publication strategies, networking, and career preparation.
Funding and Practicalities
This is a self-funded PhD position. Applicants should ensure they can cover tuition fees and living costs for the duration of the programme. Informal enquiries are strongly encouraged prior to application to discuss project fit, expectations, and potential refinements to the research questions. Please contact Dr Cigdem Sengul (Cigdem.Sengul@brunel.ac.uk).
How to apply
If you are interested in applying for the above PhD topic please follow the steps below:
- Contact the supervisor by email or phone to discuss your interest and find out if you would be suitable. Supervisor details can be found on this topic page. The supervisor will guide you in developing the topic-specific research proposal, which will form part of your application.
- Click on the "Apply here" button on this page and you will be taken to the relevant PhD course page, where you can apply using an online application.
- Complete the online application indicating your selected supervisor and include the research proposal for the topic you have selected.
Good luck!
This is a self-funded topic
Brunel offers a number of funding options to research students that help cover the cost of their tuition fees, contribute to living expenses or both. The UK Government is also offering Doctoral Student Loans for eligible students, and there is some funding available through the Research Councils. Many of our international students benefit from funding provided by their governments or employers. Brunel alumni enjoy tuition fee discounts of 15%.
