Yes — I use AI.

I use AI both in my FabAcademy work and in my everyday life, and I’d like to briefly share my thoughts on the topic so my intentions and reflections don’t get lost.

To begin with, I don’t use AI just to say “wow, that’s a lot — AI, do it and done!” Instead, I see AI as a tool to improve workflows and take over tasks that I could do myself but that take a lot of time. I don’t want to waste the time I save this way; I want to use it to dive even deeper into the subject and engage more thoroughly with what I’m doing.

AI can already do so much that one could argue it might harm jobs in the long run. I understand that concern, and I’m not happy about it either. There are things I’m personally sad that I can, in practice, no longer do myself: I’ve worked extensively with media and spent most of my studies creating digital content, and manual workflows often remain more time-consuming and labor-intensive by a significant factor, to the point where doing things by hand no longer makes practical sense.

That makes me somewhat sad, and it’s part of why I chose to pursue my master’s. Efficiency is a very important part of work for me, and I struggle with workflows when I know I could reach a similar quality in much less time, especially when time is short. And let’s be honest: in FabAcademy, it certainly is.

For these reasons, I use AI where I believe it costs little in quality but saves significant time.

At the same time, ignoring AI doesn’t make the topic go away. AI is already a daily part of work in 2026 and will likely remain so long-term. As someone who is technically curious and comfortable with new technologies, I both want and need to work with AI, to stay competitive and to keep learning.

There are many reasons why it’s important to understand AI and to be able to work with it, whether that means prompting, current software, or other related topics. And even if AI doesn’t become as big a topic as it seems today, it will remain a useful tool, as it already is now. It’s unlikely to go backward.

So if you read something on my site about AI, please don’t think it means I didn’t engage with the topic — rather, that I have engaged with it very deeply. I always look at the underlying mechanics and how to reach the goal, whether an example was created with AI or not.

Still, it’s important to me that content is clearly labeled. AI can be misleading or create uncertainty — and while I don’t think that applies strongly to what I post, having a consistent approach is important. That’s why you’ll see this icon across my entire site. Whenever you see it, you’ll know that the content was (mostly) generated with AI.

You’ll also be able to see the model used, the prompt, and, where applicable, additional settings like “Thinking.” Even though I personally value reproducibility, seeds, daily model updates, existing context, personal model settings, and even the way thinking models process information mean that outputs will often differ from mine. The prompts are meant to act as a thought scaffold from start to goal and to provide as much transparency as possible, but they unfortunately cannot guarantee 100% reproducible results.

Tools Used

Software and interfaces in my AI workflow

I mainly use AI in two areas: image workflows and code workflows. To keep this section easy to scan, I grouped the tools by what I actually do with them instead of mixing product names, interface types, and overlapping categories.

Images

ComfyUI

ComfyUI is an open‑source, node‑based generative AI interface that I use for visual workflows such as image generation and diffusion model pipelines. Instead of working through a single prompt box, it lets me connect modular nodes into custom processing graphs and save or share entire workflows.

That visual, modular structure makes more complex image generation transparent and reusable, which is especially useful when I want to understand how an output was built or return to a setup later.

Standard ChatGPT Image Generation

For quicker image tasks, ideation, or when I do not need a full node-based workflow, I also use standard ChatGPT image generation. It is useful when speed, convenience, and direct iteration matter more than building a reusable visual pipeline.

Texts

ChatGPT / Claude / other models

For text-focused tasks, I use ChatGPT, Claude, or other models mainly for spelling and grammar checks. In that context, AI is less about replacing the content itself and more about helping me polish wording, structure, readability, and consistency.

This is especially useful when I already know what I want to say but want a faster way to check clarity, phrasing, or language quality before publishing.

Code / CLI

Terminal-first coding workflows

For development work, I mostly use terminal-based AI tools. They bring AI directly into the coding workflow, which makes them useful for prototyping, editing, refactoring, project setup, Git work, and integrating AI into existing command-line based processes.

  • Codex CLI / Codex

    Codex is lightweight, open‑source, and easy to integrate into familiar terminal workflows. I mainly see it as a strong tool for quick edits, implementation work, automation, and cost-efficient routine coding tasks.

  • Claude Code

    Claude Code is especially useful when deeper reasoning, larger project context, or multi-step orchestration matter more. I use it where architectural understanding, refactoring, and navigating bigger multi-file systems become more important.

  • OpenCode

    OpenCode is interesting to me because of its multi-provider flexibility and local configuration. It fits well when I want a standalone terminal UI that can adapt to different models and workflows.

A typical code workflow can involve thinking through system structure with stronger reasoning tools, generating or refining implementation in Codex or OpenCode, and then integrating the result back into the normal development and CI/CD process.