*Cleans cobwebs hahaha* 😄
It has been more than three years since I last published anything on here. A lot has happened! So let's do some catch-up first. I got my master's degree in International Technology Law from Vrije Universiteit Amsterdam in 2023. Special appreciation to everyone who helped make the dream a reality. After the Netherlands, I moved to the UK and worked for 16 months at Queen's University Belfast (QUB) in the School of Electronics, Electrical Engineering and Computer Science as a privacy policy technician. Actually, my office was in the School of Law, as I joined an interdisciplinary project called Shaping the Metaverse.
Now back to today’s blog. A bit of my work at QUB involved artificial intelligence, although from the angle of cybersecurity and privacy. So for the past couple of years, I have been immersing myself in these important topics, but mostly from an academic base. In the coming months, I wish to get more practical and industry-focused. To facilitate this, I signed up for a few programs, both paid and free, and will be sharing my notes as I work through the resources. Actually, I take my notes by hand, with pen and paper. But then, I realized that unless I was also preparing for an exam, I never really went back to read my notes. As I thought about this, I got the idea to publish my notes as blogs. Doing this means I get to read my notes again. Hopefully, this will be the first of many I will be publishing.
Considering my areas of interest, please feel free to share any resources you think I should look at. Thank you in advance!
I first heard of the 2025 Massive Open Online Course (MOOC) in Artificial Intelligence (AI) by the Law Society of Ireland about three weeks ago, when one of the facilitators made a post about it. Fortunately, it was still open, and I signed up. It closes on 19 August 2025.
DISCLAIMER: Nothing in my notes should be considered legal or professional advice. All copyrights belong to their owners; I am only summarizing what stood out to me and sometimes sharing my thoughts. If you are mentioned and would rather not be named, just reach out to me, and I will delete or take down as appropriate. Reading my notes should not replace working through the resources themselves, but hopefully, they will help you retain knowledge and close gaps.

William Frye – Global Landscape of AI
Building somewhat on Moore's Law, there is Amara's Law, by Roy Amara, which states that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." This is extremely true for AI. Europe is famous for the Brussels effect, e.g., the GDPR, where standards are set so high that complying with them likely satisfies any other regulation. However, the EU AI Act did not go this way. The main reason is the AI arms race, because AI has significant national security implications. The US, for its part, took a deregulation approach: one of the first things the Trump Administration did was revoke Biden's executive order on AI safety and AI use at the federal level. Trump also sought a 10-year ban on AI regulation at the state and local level. A 99-1 Senate vote on July 1 halted the proposed ban. For now. Things could pick up again in the House of Representatives.
A key global consideration is energy demand: for data centers, for model training, for inference (AI output). It is estimated that data centers will need twice as much energy by 2026. That raises questions about climate change and water shortages, since water is needed for cooling. Where will the energy come from? Fossil fuels? Nuclear power plants take time to build, and massive infrastructure projects are needed. Military use cases of AI are currently unregulated. Another consideration is strategic dependencies and the supply chain, which is fragile.
David Kerrigan – What is AI?
Be aware of AI's growing capabilities. A good example: earlier models generated human images with more than 10 fingers or toes, but today's models can generate realistic videos, making it harder to distinguish what is real.
One concern with genAI use has been the risk of employees feeding models proprietary data. That makes me wonder: is Retrieval Augmented Generation (RAG) a secure way to feed LLMs proprietary data? Again, early genAI models could only handle text; today's models are multi-modal: video, images, code, audio, etc. With AI, it's not just about the technology; it's also about the mindset.
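To make the RAG question concrete, here is a minimal sketch of the pattern. Everything in it is illustrative: a real system would use vector embeddings and an actual LLM call, while this toy uses plain keyword overlap. The point it shows is that the proprietary corpus stays in a local store, and only the few retrieved snippets are placed into the prompt, which narrows (but does not eliminate) what leaves the organization.

```python
# Minimal, illustrative sketch of the Retrieval Augmented Generation (RAG)
# pattern. Retrieval here is simple keyword overlap, standing in for a
# real embedding-based search; the "model call" is just the final prompt.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the text sent to the model. Only the retrieved snippets
    leave the local store, not the whole proprietary corpus."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical in-house documents kept locally.
docs = [
    "Internal policy: client files must stay on the firm's servers.",
    "Cafeteria opens at 8am on weekdays.",
    "AI literacy training is mandatory for all fee earners.",
]
print(build_prompt("What does the policy say about client files?", docs))
```

Even in this sketch, the retrieved policy snippet still ends up in the prompt, so RAG reduces exposure rather than guaranteeing security; prompt contents can still be logged or retained by the model provider.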
Internal AI governance looks at policies, procedures, and processes about the organizational use of AI. External regulations are laws such as the EU AI Act.
Donna O’Leary – AI Literacy
AI literacy is required under Article 4 of the EU AI Act. AI actors in the AI value chain include providers, importers, distributors, deployers (users), manufacturers, and intermediaries. The EU AI Act is risk-based and regulates two distinct concepts: AI systems and General Purpose AI (GPAI) models.
AI literacy is mandatory and has been in force since 2 February 2025. It applies to all businesses that provide or deploy AI systems. The requirement is to have a sufficient level of AI literacy, and training should be context- and role-specific. The only exception is use in a personal, non-professional capacity. "Sufficient" means being able to understand how AI works, make informed decisions, use AI responsibly and effectively, recognise its benefits, and be aware of its risks and potential harms.
In complying with this requirement, the following steps were suggested:
- Audit: What AI tech is in use? What’s the tech stack?
- Assess: After identifying what products are in use, assess their level of risk. The Act classifies AI as prohibited, high, limited, or minimal risk. For example, ChatGPT is considered limited or minimal risk.
- Design: In designing the AI literacy program, consider skills, knowledge, education, role, and context. For instance, in designing an AI literacy program for lawyers, there should be lessons on professional obligations for lawyers when using AI tools.
- Monitor: AI moves fast, so monitor developments and stay updated. At the moment, compliance with the requirement is by self-certification.
- Maintain: AI literacy is not a one-off effort; there will be a need to ensure those who need the training can access it and that training content remains relevant.
AI has been around since the 1950s. The difference between today's systems and earlier ones is our ability to interact with them. GenAI works based on patterns learned from existing data and the mathematical relationships between words (known as tokens). There are public and private AI models; most workplaces ban public models.

Kevin Neary – Everyday Skills and Future Skills for AI
Think of agents in terms of delegation and monitoring: new ways of thinking, not just new tools. One key skill in the AI age is critical thinking: spotting flaws, asking the right questions, challenging outputs, confirming relevance, and close reading by analysing every word and punctuation mark in AI output.
Another important skill for the AI age is systems thinking. Looking at the bigger picture, how tasks connect, and feedback loops. One little assignment to immediately apply this is to take one legal workflow and think about how AI fits in.
The last skill Kevin talked about was responsible thinking. This is where we think about ethics, fairness, and the legal impact of the use of AI.
David Kerrigan – Practical Session Using AI
This touched on how to prompt better. Better questions = better answers. A simple framework for prompting genAI is to state the What, Why, and How, and to be clear and detailed. Even after getting an initial, usually general response, guide genAI to try harder to reduce hallucinations. David introduced the concepts of zero-shot vs few-shot prompting. Zero-shot prompting works for the use cases genAI is already good at; few-shot prompting is for use cases it generally doesn't know, in which case include examples of the output you want. Meta-prompting is asking genAI to write better prompts for you.
To recap, David says if you want good outputs, you need great prompts: define the task, provide context, specify the persona to deliver the result, and ask the AI to try again if you do not get the results you want.
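The zero-shot vs few-shot distinction can be sketched in a few lines. The helper names and the clause-classification task below are hypothetical, not from the course; the sketch just shows that a few-shot prompt is the zero-shot prompt with worked (input, output) examples prepended, so the model can copy the pattern.

```python
# Hypothetical sketch of zero-shot vs few-shot prompt construction.
# A few-shot prompt is simply the zero-shot prompt with examples prepended.

def zero_shot(task: str, text: str) -> str:
    """Task instruction plus the input: no examples given."""
    return f"{task}\n\nInput: {text}\nOutput:"

def few_shot(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Prepend (input, output) pairs so the model can imitate the pattern."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{shots}\n\nInput: {text}\nOutput:"

# Hypothetical legal task with two worked examples.
task = "Classify the clause as 'liability' or 'confidentiality'."
examples = [
    ("Neither party shall disclose the other's trade secrets.", "confidentiality"),
    ("Damages are capped at the fees paid in the prior 12 months.", "liability"),
]
print(few_shot(task, examples,
               "Each party shall keep the terms of this agreement secret."))
```

The same structure carries over to meta-prompting: instead of examples, you would make the task itself "rewrite my prompt to be clearer and more specific".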
Conclusion
When I started, I did not realize this would get as long as it has. I have decided to split the notes into five parts, one per module. I enjoyed taking the course, as it crystallized some of my thinking around AI and legal considerations. It also solidified my intention to dig deeper into data privacy and security for AI. On that front, following my recent work at QUB, I have been thinking a lot about protocols for machine unlearning that could help operationalize the right to be forgotten in AI systems after training. Another thing I am thinking about is the accessibility of AI. While AI brings many benefits to assistive technology, are we also considering how it can make accessibility harder? For example, a popular AI agent failed the Tab Test, a basic accessibility check of whether a site is navigable with just a keyboard, a vital need for those with motor disabilities. I am excited to get back into prolific writing. Stay tuned for more.
