Cyber Scotland Week has become a cornerstone of Scotland’s efforts to build a more cyber‑resilient nation. Each year it brings organisations, communities, and sectors together to share practical knowledge and strengthen collective awareness. The 2026 theme, “Can’t Hack it?!”, reinforces a simple truth: cyber resilience isn’t reserved for specialists. It’s something everyone can influence – whether you’re running a small business, delivering public services, or supporting your local community.
As part of our ongoing commitment to Scotland’s cyber and wider social wellbeing, we’ve also used Cyber Scotland Week as an opportunity to give back. Last year, our event raised £1,000 for The Beatson, supporting vital cancer care across Scotland. This year, we matched that donation with £1,000 for SAMH, helping to fund essential mental health support at a time when workplace pressures and digital complexity continue to rise.

It was within this national context that we hosted Secure AI: Strategies for Tomorrow. AI is transforming how organisations operate – speeding up workflows, reshaping decision‑making, and creating new opportunities for efficiency. But as adoption accelerates, many leadership teams find themselves facing challenges they have never encountered before. Shadow AI is emerging faster than governance frameworks. New legislation is expanding director‑level responsibilities. And AI‑driven risks are evolving at a pace that traditional approaches simply can’t match.
The purpose of our event was to give Scottish leaders a clear, practical space to understand what responsible AI adoption really requires. Secure AI centred on leadership behaviour, operational clarity, communication discipline, and the foundations that make organisations genuinely resilient.
Lugo has proudly taken part in Cyber Scotland Week every year, and this year’s event reflected its spirit of building resilience, raising awareness, and supporting communities – no matter their size or sector.
We created a space where SMEs, accountants, housing associations, third‑sector groups, regulators, and security professionals could come together to explore one shared theme: how to use AI safely, confidently, and strategically.
While technology will continue to evolve at extraordinary speed, the organisations that stay resilient will be the ones whose leaders act early, communicate clearly, and take ownership of AI governance before a crisis forces their hand.
This was the core theme that emerged from Secure AI: Strategies for Tomorrow, held on 25 February 2026 at Waverley Gate in Edinburgh. Across keynote sessions, live demonstrations, crisis simulations, and legislative insights, one message came through with absolute clarity: Secure AI is not about tools, it is about leadership discipline, operational maturity, and organisational readiness.
The future belongs not to the fastest adopters, but to the most prepared.

AI Adoption Has Accelerated Beyond Traditional Governance
The Ingram Micro session delivered a powerful reminder: AI is scaling at a speed never before seen in technology evolution. Previous technology shifts moved in years; generative AI moved in months… or days.
- Mobile phones: 16 years to reach 100M users
- Internet: 7 years
- Facebook: 4.5 years
- ChatGPT: 3 months
- GPT4All: 1.5 days
Organisations are not simply facing a new technology; they are facing a new pace.
This exponential adoption curve confronts leadership teams with a critical challenge: AI is accelerating, but many organisations are not structurally or operationally prepared to manage it.
The result is a widening gap between AI’s capability and an organisation’s ability to govern, secure, and meaningfully control its use.
Shadow AI: A Growing Governance Problem, Not a User Problem
Attendee survey results echoed trends from the Microsoft Work Trend Index:
- 75% of employees want AI tools at work
- 70% of AI users admit to bringing their own tools (BYOAI)
- This behaviour spans all age groups, not just digital natives
Paired with infrastructure data gathered during the event:
- 48% use BYOD
- Only 36% have structured hardware refresh cycles
- 20% are unsure how device patching occurs
…it becomes clear that Shadow AI is no longer a niche risk – it is the default behaviour inside many SMEs and third‑sector organisations.
But Shadow AI is not driven by recklessness. It is driven by need.
When employees lack efficient, sanctioned tooling – especially in resource‑constrained organisations – they reach for solutions that help them work smarter and faster.
This means Shadow AI is, at its heart:
- a governance issue
- a culture issue
- a communications issue
- and a leadership issue
It is a symptom of organisational gaps, not employee intent.
Cyber Exercise Simulating a Ransomware Attack
Jude McCorry, CEO, Cyber and Fraud Centre Scotland
Jude McCorry’s session was one of the most impactful parts of the day. Drawing on years of real‑world experience in technology, crime prevention, and national cyber resilience, she led attendees through a live, interactive simulation of a ransomware incident targeting a small organisation with sensitive client data.
Her session delivered three critical leadership lessons:
Incident Response is Only Valuable if It Is Tested
An untested incident response plan (IRP) is no plan at all.
Far too many organisations have documents labelled “Incident Response” sitting untouched in SharePoint – but these plans collapse under pressure because:
- decision-making hierarchies are unclear
- contact lists are outdated
- backup procedures are untested
- third-party dependencies aren’t mapped
- executives have never rehearsed their role
Jude’s simulation showed attendees how quickly confusion, misalignment, and inconsistent messaging can escalate a technical issue into an organisational crisis. The lesson: rehearse your incident response plan regularly. Only realistic run‑throughs ensure teams know their roles, refine communication channels, and close gaps in procedures before a real incident occurs. Without such exercises, even the most comprehensive plans risk falling apart under pressure, leaving organisations vulnerable at critical moments.
Ransomware Affects Every Part of the Business, Not Just IT
The exercise demonstrated that ransomware impacts:
- Finance (payments, payroll, cash flow)
- HR (staff communication, wellbeing, trust)
- Operations (service continuity, customer impact)
- Legal (regulatory notifications)
- Communications (internal and external)
- Governance (board oversight and accountability)
Leaders saw firsthand that ransomware is not just an IT problem; it is an organisational event with business‑wide consequences.
Cyber Crimes Are Crimes – Police Scotland Should Be a Contact Point
One of Jude’s most practical reminders was also one of the most frequently overlooked:
Cyber crimes are crimes – and the police remain a critical first point of contact.
Many organisations fail to notify Police Scotland during incidents, assuming the issue is too technical or IT‑specific. Jude emphasised that policing units work closely with NCSC, cyber incident hubs, insurers, and threat‑intelligence providers – and early engagement can improve outcomes.
Secure AI Starts With Control: Default Deny, Ringfencing, and Zero Trust
Eoin McGrath’s session provided leaders with crisp, jargon‑free clarity on why AI‑era security requires a shift away from trusting devices, users, or applications simply because they exist inside the perimeter.
The central message: In the age of AI‑driven attacks, the only sustainable defensive model is Zero Trust combined with a default‑deny approach.
This means:
- software must be explicitly allowed before it can run
- applications must be ringfenced (isolated)
- even approved applications should only access what they need
- AI tools cannot be assumed safe simply because they come from recognised vendors
This control‑first approach pairs perfectly with AI adoption:
AI increases productivity, while Zero Trust ensures that productivity happens within a secure boundary.
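For non-technical leaders, the default‑deny principle can be easier to grasp as logic than as jargon. The sketch below is a purely illustrative Python example (the allowlist entries and file paths are hypothetical, not a real product configuration): software runs only if it has been explicitly approved, so an unknown AI tool is blocked by default rather than trusted by default.

```python
# Illustrative sketch of default-deny application control.
# The allowlist and paths below are hypothetical examples.

ALLOWLIST = {
    "C:/Program Files/Microsoft Office/WINWORD.EXE",
    "C:/Program Files/ApprovedAI/assistant.exe",
}

def may_run(executable_path: str) -> bool:
    """Default deny: only explicitly approved software may execute."""
    return executable_path in ALLOWLIST

# An approved tool runs; an unknown AI download does not.
print(may_run("C:/Program Files/ApprovedAI/assistant.exe"))  # True
print(may_run("C:/Users/temp/unknown_ai_tool.exe"))          # False
```

The design choice is the point: the burden of proof sits with the software, not the defender. Real application‑control products layer on signatures, publishers, and ringfencing rules, but the decision model is the same.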
Media and Crisis Communications Session
The communications masterclass from Hannah Kennedy-Bardell reframed executives’ understanding of incident management entirely. Drawing on experience from Parliament, national crises, energy sector incidents, and international communications events, she took leaders through:
- early‑phase uncertainty
- how attackers manipulate the narrative
- why “no impact/no data lost” claims are dangerous
- the importance of empathy, clarity, pace, and consistency
- how to choose spokespeople and prepare them
- how to handle media, internal stakeholders, and social escalation
Her most important insights for leadership:
- Communications discipline is a security control.
- Staff become your loudest channel – intentionally or not.
- You will be judged more on how you communicate than on the incident itself.
Her session placed communications squarely within the domain of risk management and resilience strategy, not PR.
Building Cyber Resilience: Cyber Essentials, Supply Chain & Response Planning
Liz Smith expanded the conversation beyond tooling and into governance, accountability, and regulatory expectations. Her session covered key areas of legislation and leadership duty that executives cannot afford to ignore:
Artificial Intelligence (Regulation) Bill
This emerging bill sets out expectations for transparency, accountability, and governance over AI use.
Organisations must begin preparing for:
- AI accountability mapping
- documentation of AI use cases
- risk assessments
- oversight of automated decision‑making
Cyber Security and Resilience Bill
This bill marks a step-change in how SMEs and critical supply‑chain partners must demonstrate:
- proactive cyber readiness
- resilience testing
- reporting obligations
- supply-chain assurance measures
This legislation raises expectations around:
- lawful data access
- auditability
- data retention
- cross‑organisational sharing
- user rights in AI‑infused environments
Liz made it clear: directors cannot delegate cyber governance.
AI risk, data protection, and operational resilience now form part of a director’s fiduciary and legal responsibility.
Her session grounded the event’s themes in real‑world regulatory obligations and reminded leaders that governance is not optional.
Across every session – technical, strategic, legislative, operational, and communications – one conclusion became unmistakable:
AI risk is not a future problem. It is a present operational reality. The gap is not technological, it is organisational.
The organisations best prepared for the AI era will be those whose leaders:
- treat AI as a governance challenge
- actively manage Shadow AI
- test their incident response plans regularly
- understand the legislative and regulatory landscape
- invest in communications capability
- simplify their operating environment
- automate controls
- build resilience before an incident occurs
The most powerful takeaway from the entire day?
Secure AI is not defined by your tools. It is defined by your leadership.
Unlocking Productivity and Security with Microsoft 365 Copilot
Michael Markey showcased how AI, when deployed responsibly, can deliver extraordinary productivity uplift – with:
- instant document drafting
- intelligent recap
- summarisation
- automated insights
- Secure SharePoint Search
- integrated Teams intelligence
- business‑specific connectors
- different Copilot SKUs
- and Microsoft’s Getting Started Offers and promotions
Copilot is not just a tool – it is a new way of working.
But it only works sustainably when combined with:
- Purview
- Zero Trust
- governance
- clarity
- leadership
What next for your organisation?
The conclusion from Secure AI: Strategies for Tomorrow bears repeating: AI risk is not a future problem but a present operational reality, and the gap is organisational, not technological.
The organisations best prepared for the AI era will be those whose leaders take ownership now – treating AI as a governance challenge, actively managing Shadow AI, testing their incident response plans, understanding their regulatory obligations, and building resilience before an incident occurs.
Secure AI is not defined by your tools. It is defined by your leadership.
If the themes in this article reflect conversations already happening within your organisation, there are several ways to continue the discussion.
If you would value a leadership‑level conversation about AI risk, cyber resilience, and governance, or you’re ready to find out more about Cyber Essentials, you can book a time with a Cyber Advisor that suits you here.
The slides from Secure AI: Strategies for Tomorrow are also available to download as a PDF. They are designed to support board‑level and leadership discussions, covering Shadow AI, incident response, legislative expectations, and practical resilience planning.
For organisations looking for a practical starting point, Lugo also offers a free cyber security eyesight report. This provides a high‑level, plain‑English view of where governance, control, and resilience gaps may exist. It is designed for decision‑makers who want clarity, not technical noise.
The most important decision is not which tool to adopt next, but whether leadership is ready to take ownership of secure AI now.












