Week 05: 3D scanning and printing

Another exciting week at FabAcademy! Coming in, I was fairly familiar with 3D printing and a bit less so with 3D scanning. Throughout the session I started getting ideas of what I wanted to make.

This week, though, I had to catch up on the session recording, since I was only able to join the global open time to show what I did last week, specifically Project 01 and Project 06.

The reason was me going to Milan for the winter Olympics :)

Me at the winter Olympics in Milan

Jump to this week's checklist


Group Assignment: Testing design rules for our 3D printer

Essentially, this week's group assignment was to test how good our printers are. Thus, Carlos and I used an open-source testing print that tests for: overhangs, fine features, flow control, bridging, negative feature resolution, XY ringing, z-axis alignment, and dimensional accuracy.

3D printer test model in slicer

Carlos tested the Prusa i3 MK3S printer, and honestly, it performed poorly on most of the mentioned metrics. It was acceptable on overhangs, worse on bridging, and very bad on fine features (resulting in lots of stringing), as can be seen below:

Prusa test print front view

Prusa test print side view

The goal of these tests is to see how far we can push our FDM printers. Can they print fine details well? Can we rely on them to require less support (thus saving filament)?

I had previous experience with 3D printing, and I knew the Bambu Lab A1 mini would perform much better, but I needed to prove it, lol…

I downloaded the same file, and imported it into Bambu Studio since it had all the properties of the printer built-in.

Test model loaded in Bambu Studio

As I expected, the print was much cleaner, and took about half the time compared to the Prusa.

Bambu A1 mini test print result

While this sounded like a pretty straightforward comparison, there are some additional elements to consider when 3D printing, and you might need to change them (among other factors) depending on your print: filament type, support type, and infill pattern.

Filament, Support, and Infill Comparison

Three variables that'll make or break a print depending on what you're going for. Here's a quick reference for each:

Filament Types

Not all filament is the same, and picking the wrong one can mean a brittle prototype or a print that warps off the bed halfway through. The most important ones to know for FDM:

Filament | Strength | Flexibility | Print Temp | Moisture Sensitive | Best For
---|---|---|---|---|---
PLA | Medium | Rigid | 190–220°C | Low | General prototypes — easiest to print, biodegradable, but goes brittle over time
PETG | Medium-High | Slight flex | 230–250°C | Medium | Functional parts — great middle ground between PLA and ABS
ABS | High | Medium | 230–250°C | Low | Durable parts — strong, but needs an enclosure or it warps
ASA | High | Medium | 240–260°C | Low | Outdoor/UV-resistant — basically ABS but actually handles sunlight
TPU | Medium | Very flexible | 220–240°C | Medium | Flexible parts, gaskets, anything that needs to bend without snapping
Nylon | Very High | Medium | 240–270°C | Very High | High-stress mechanical parts — ridiculously strong but needs completely dry filament

Worth noting: PLA and PETG both emit particles during printing, so ventilation isn't optional regardless of which you go with.

Support Types

Supports are mainly about the tradeoff between material waste, removal difficulty, and how clean the underside of your print looks. The goal is always to use as little as possible.

Support Type | Material Used | Removal Ease | Under-Surface Quality | Best For
---|---|---|---|---
Normal (Linear) | High | Medium | Fair | Simple flat overhangs — fast to slice, messy to remove
Tree | Lower | Easier | Good | Organic shapes, minimizing contact points with the model
Organic | Medium | Easy | Good | Detailed models where surface marks actually matter
None (bridging) | None | N/A | Best | Short spans under ~60mm and angles under 45° — just bridge it, skip the support entirely

The big win with tree supports is how much less material they use and how much cleaner the removal is. Normal supports are faster to generate but you'll spend more time cleaning up the surface afterward.

Infill Patterns

Infill is one of the more interesting decisions — you're basically choosing a microstructure for the inside of your part. Percentage matters too: 15–20% is fine for most things, go 40%+ if you actually need structural strength.

Pattern | Strength | Print Speed | Material Use | Best For
---|---|---|---|---
Lines | Low | Fast | Low | Non-structural parts, decorative prints
Grid | Medium | Medium | Medium | General purpose — solid default if you're unsure
Triangles | Medium-High | Medium | Medium | Better horizontal load resistance than grid
Cubic | High | Medium | Medium | Good all-around structural choice, strong in multiple directions
Honeycomb | High | Slower | High | High strength-to-weight ratio, distributes load really well
Gyroid | High | Medium | Medium | Isotropic strength (same in every direction) — best when you don't know where loads will come from
Concentric | Low | Fast | Low | Flexible prints or when you want clean top layers

Gyroid is honestly underrated — it performs consistently across all axes, which is useful when you're not sure how a part will actually be stressed in use.
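To make the percentage tradeoff concrete, here's a back-of-envelope mass estimate. The 25% solid-shell fraction and the PLA density are my own rough assumptions, not slicer output:

```python
def filament_grams(part_volume_cm3, infill_pct, wall_fraction=0.25,
                   density_g_cm3=1.24):
    """Rough PLA mass estimate: walls/top/bottom are treated as solid
    (assumed ~25% of the bounding volume here), the rest is filled
    at infill_pct. Both assumptions should be tuned per model."""
    solid = part_volume_cm3 * wall_fraction
    filled = part_volume_cm3 * (1 - wall_fraction) * infill_pct / 100
    return (solid + filled) * density_g_cm3

# A 100 cm³ part at 20% infill vs 40% infill:
print(round(filament_grams(100, 20), 1))  # 49.6
print(round(filament_grams(100, 40), 1))  # 68.2
```

Doubling the infill percentage doesn't double the mass, because the solid shells dominate on small parts; that's why 15–20% is usually enough.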


Learning from Global Session and AI Recitation

Global Session

This week's session was given by Adrian Bowyer himself, alongside Ohad. If that name doesn't ring a bell, it should.

Adrian is the person who created the RepRap project: a self-replicating 3D printer that could print most of its own parts. The whole idea was a machine that makes things to make more machines: an open-source fabrication chain reaction.

Before RepRap, desktop 3D printing basically didn't exist for normal people; it was locked behind expensive commercial systems. Adrian's work is the direct ancestor of every cheap FDM printer on the market today. Having him teach the session was genuinely cool.

As he pointed out himself, you know you've made something that matters when it ends up in a museum. The original RepRap is there now.

RepRap project overview from global session

History of 3D Printing

3D printing conceptually goes back decades, even if it didn't look anything like what we use now. David Jones is credited with describing the concept behind the first modern 3D printer.

Early 3D printing history

Charles Hull then commercialized stereolithography in 1984, using UV lasers to cure photopolymers layer by layer.

Charles Hull and SLA commercialization

FDM came along in 1988, and the origin story is kind of funny: Scott Crump built the first version from a modified glue gun, reportedly to make a toy for his daughter. Then Adrian took the concept and made it self-replicating and open-source through RepRap.

How does 3D scanning work?

The scanning section was new territory for me. The touch probe was one of the earliest ways to digitize physical objects. You physically move it across a surface and record coordinates.

There's a subtle but important detail: detecting when the probe breaks contact is more accurate than detecting when it makes contact.

The general scanning pipeline is:

  1. Gather images from multiple angles (the easy part)
  2. Point correspondence, matching the same feature across different views (this is the hard part)
  3. Generate a 3D point cloud (easy once step 2 is solved)
  4. Mesh/triangulate the cloud into a solid surface (also hard)

You can cheat step 2 by using structured light: project a known pattern onto the object, so the algorithm already knows where it's looking.
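Steps 2–3 of the pipeline boil down to triangulation: once the same feature is matched in two views with known camera matrices, the 3D point falls out of a small linear system. A minimal numpy sketch with toy cameras (not a real calibration):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from one matched pixel pair via linear (DLT)
    triangulation: each camera contributes two linear constraints."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least squares: the last right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and one shifted 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

With noiseless matches this recovers the point exactly; real pipelines solve the same system for millions of matched features at once.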

For meshing, Poisson Surface Reconstruction is a common approach, but it needs surface normals (vectors pointing perpendicular to the surface) to work properly. Triangles are the standard polygon for meshes because they almost always stay flat — quads don't guarantee that.
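Those surface normals are usually estimated straight from the point cloud: fit a plane to each point's neighbors and take the direction of least variance. A small numpy sketch of that idea (brute-force neighbor search, fine for toy data):

```python
import numpy as np

def estimate_normal(cloud, idx, k=8):
    """Estimate the surface normal at point `idx` as the eigenvector of the
    neighborhood covariance with the smallest eigenvalue (PCA normals)."""
    p = cloud[idx]
    dists = np.linalg.norm(cloud - p, axis=1)
    nbrs = cloud[np.argsort(dists)[:k]]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, 0]

# Noisy points on the z=0 plane: the estimated normal should be ~±z.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50),
                       rng.normal(0, 0.001, 50)])
n = estimate_normal(pts, 0)
print(abs(n[2]) > 0.99)  # True
```

Note the sign ambiguity: PCA gives a line, not a direction, so reconstruction pipelines still have to orient the normals consistently (usually toward the scanner).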

What I didn't know: modern AR devices use time-of-flight sensors with picosecond timing to do real-time scanning. The tiny LiDAR sensor in recent iPhones is doing exactly this.

You can simplify the scanning setup in two ways: move the camera around a stationary object, or rotate the object in front of a fixed camera. Either works.

3D scanning pipeline diagram

3D printing vs milling

The comparison between additive and subtractive was one of the more practically useful things from the session. 3D printing wins when:

  • You need internal geometry, milling literally can't get inside an object
  • Complexity is basically free, a complicated shape takes the same print time as a simple one
  • You want minimal material waste
  • You don't have a rigid enough setup for the cutting forces that milling requires

3D printing also only needs simple motion (really just two axes within each layer), since everything above the current layer is open. Milling complex shapes needs 5-axis machines to avoid collisions. The downsides of printing: anisotropy (strong in-plane, weak across layers), slow speed, holes that come out slightly undersized, and mediocre surface finish.

FFF/FDM details

Most of what we'll use is FFF, the open-source name for what Stratasys trademarked as FDM. The process melts filament and extrudes it layer by layer.

PLA and PETG are the main materials. Many PLA filaments contain titanium dioxide pigment, and both materials emit particles during printing that can be harmful, so ventilation is not optional. You also need to store filament dry, since moisture ruins it.

Things to keep in mind:

  • Overhangs past around 45 degrees need support material
  • Bridges (horizontal spans with nothing below) can work without support as long as they're not too long
  • Infill doesn't need to be 100% (usually isn't). There are different patterns and percentages, each with different strength characteristics. Cylindrical hollow channels can even act like fiberglass reinforcement
  • You can pause a print mid-way and insert components (magnets, nuts, electronics) before resuming.
  • Bed leveling is never perfect, but you can map the surface and dynamically offset the head to compensate for inconsistencies
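The overhang and bridging rules above can be turned into a rough pre-flight check. The 45° threshold is the rule of thumb from the session, and the ~60 mm bridge limit is the figure from the support table earlier; neither is slicer gospel:

```python
def needs_support(overhang_deg, bridge_mm=0.0,
                  max_overhang=45.0, max_bridge=60.0):
    """Rough FDM rule of thumb: support anything steeper than ~45 degrees
    from vertical, unless it's a short horizontal bridge that can span
    freely between two anchored ends."""
    is_bridge = overhang_deg >= 89.0  # essentially horizontal
    if is_bridge:
        return bridge_mm > max_bridge
    return overhang_deg > max_overhang

print(needs_support(30))                 # False: prints fine
print(needs_support(60))                 # True: steep overhang
print(needs_support(90, bridge_mm=40))   # False: short bridge, skip support
```

In practice the slicer's support painting tools make the final call, but a check like this is handy when deciding how to orient a part.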

One clever trick: you can print with FFF and then fill the part with resin to get a denser hybrid material. The Minkowski sum is the mathematical operation used to offset shapes when fitting printed parts together, since prints tend to come out slightly undersized.
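In practice, that offset often reduces to a simple diameter compensation for holes. The shrink and clearance values below are placeholders you'd calibrate per printer and material:

```python
def compensated_hole_diameter_mm(nominal_mm, shrink_mm=0.2, clearance_mm=0.1):
    """Offset a hole outward (a 1D slice of the Minkowski-sum idea):
    FDM holes print undersized, so model them larger by the measured
    shrink plus whatever fit clearance the mating part needs."""
    return nominal_mm + shrink_mm + clearance_mm

# An M3 bolt (3.0 mm) gets modeled as a 3.3 mm hole with these assumptions:
print(round(compensated_hole_diameter_mm(3.0), 2))  # 3.3
```

The full Minkowski sum generalizes this to arbitrary shapes, growing one outline by another in every direction at once.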

GCODE is technically a readable text format, but the note from the session was: don't edit it yourself. An interesting idea, though: using an AI like Codex to generate GCODE directly.
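To see how readable G-code actually is, here's a toy parser that totals extruded filament from G1 moves, assuming absolute E values and the per-layer G92 E0 resets slicers commonly emit. Don't hand-edit real files, but reading them is fair game:

```python
def total_extrusion(gcode_text):
    """Sum filament extruded (mm) from G1 E values, assuming absolute E mode.
    G92 resets the extruder origin; negative deltas (retractions) are ignored."""
    total, last_e = 0.0, 0.0
    for line in gcode_text.splitlines():
        line = line.split(";")[0].strip()   # strip comments
        if not line:
            continue
        words = line.split()
        e_vals = [float(w[1:]) for w in words[1:] if w.startswith("E")]
        if words[0] == "G92" and e_vals:
            last_e = e_vals[0]              # origin reset
        elif words[0] == "G1" and e_vals:
            delta = e_vals[0] - last_e
            if delta > 0:
                total += delta
            last_e = e_vals[0]
    return total

sample = """G92 E0
G1 X10 Y0 E1.5 ; perimeter
G1 X10 Y10 E3.0
G1 E2.0 ; retract
G92 E0
G1 X0 Y0 E0.8"""
print(round(total_extrusion(sample), 2))  # 3.8
```

Real slicer output adds temperature, fan, and acceleration commands, but the move vocabulary is this small.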

Thermosets vs thermoplastics, and resin printing

Everything above was thermoplastic — materials that melt and re-solidify, meaning they're reusable. Thermosets are different: they harden through a chemical reaction and can't be re-melted.

Resin printing (SLA/DLP) uses thermosets. DLP is pixel/voxel-based — the projector cures a whole layer at once. Layer heights are much smaller than FDM, which gives better detail. Key parameters for resin: exposure time, bottom layer count (how many layers are used to stick to the build plate), and lift distances between layers.
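Those parameters combine into a simple print-time estimate, since resin print time depends on layer count rather than part complexity. All defaults below are illustrative, not from a specific printer:

```python
import math

def resin_print_time_s(height_mm, layer_mm=0.05, exposure_s=2.5,
                       bottom_layers=5, bottom_exposure_s=30.0, lift_s=6.0):
    """Rough MSLA print-time estimate: every layer pays exposure + lift,
    with extra-long exposures on the bottom layers that grip the plate.
    Parameter defaults are assumptions, not from a real resin profile."""
    layers = math.ceil(height_mm / layer_mm)
    normal = max(layers - bottom_layers, 0)
    return (bottom_layers * (bottom_exposure_s + lift_s)
            + normal * (exposure_s + lift_s))

hours = resin_print_time_s(50) / 3600
print(round(hours, 1))  # a 50 mm tall part ≈ 2.4 h with these settings
```

Note what's missing from the formula: the part's footprint. A DLP projector cures the whole layer at once, so ten copies on the plate take the same time as one.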

AI Recitation

The AI recitation this week was run by a student instructor and was specifically about how AI is changing the fabrication pipeline. Given how much I already use AI tools, this was a good reality check on what's actually possible right now vs. what's coming.

The session opened with this idea of "How to Make (Almost) Anything (Almost) Without Making Anything?" The premise being that AI can now handle the full chain: text description → images → 3D models → CAD → G-code → embedded code.

This wasn't hypothetical; they showed actual tools doing each of these steps. The research traces back to student projects from Neil's 2022/2023 classes, and the progress in just a couple of years is kind of insane.

3D AI models worth knowing

There's a new generation of models specifically for generating 3D geometry and textures from text or images:

  • Hunyuan3D 2.0/3.0 (Tencent, 2025): Two-stage pipeline: geometry model first, then a separate texture model. v3.0 "Omni" adds control via point clouds, skeletons, and bounding boxes. Generates in 10–25 seconds and apparently outperforms most alternatives on benchmarks
  • TRELLIS (Microsoft, 2025): 4B-parameter transformer, 1536 resolution with 4K PBR textures. Uses "Structured 3D Latents" (a compressed intermediate format for 3D)
  • Meshy AI: Described as the most production-ready consumer tool right now. Handles bulk generation, auto-rigging for animation, multi-language prompts
  • Tripo / TripoSG: Fast iteration, auto-rigging, free tier available (good for quick concept exploration)
  • Meta WorldGen (Nov 2025): Generates entire navigable 50×50m 3D environments from text. Exports to Unity/Unreal. This one is wild
  • Genie 3 (DeepMind, 2025): Playable 3D environments from a single image — basically an instant game level from a sketch

LLMs for CAD

This was probably the part I found most relevant to what I'm doing. There's actual research on using language models to write parametric CAD:

  • CAD-Llama: Fine-tuned to output structured CAD operations: sketch → extrude → fillet sequences. It describes shapes hierarchically and then generates the operation chain
  • CADCodeVerify: A self-correcting loop, the model generates CAD, renders it, looks at its own output, checks if it matches the intent, and then fixes it. That's a clever approach to the hallucination problem in CAD generation
  • CADialogue: Talk to your CAD software through text, speech, images, or by clicking geometry. Generates real executable macros within Fusion 360 or Rhino
  • LLMto3D: Multi-agent setup, Agent 1 breaks down the description, Agent 2 writes geometry code, Agent 3 assembles and adds parametric sliders. Outputs directly printable files

Mods + Claude

One cool demo was Mods: the node-based visual programming environment used in Fab Academy for generating toolpaths, but with Claude integrated directly into it.

You could ask it in plain language how to mill a PCB on a specific machine, and it would navigate the node graph, identify the right modules, and set the correct parameters.

Mods + Claude integration demo

That's the kind of thing that removes a massive amount of friction for people new to the workflow. I could see this being huge.

OpenSpec / OPSX workflow and Humans in the Loop

There was also a presentation on a structured AI development workflow called OpenSpec (OPSX). The cycle is: New → Plan → Design → Tasks → Apply. You give it an intent, it builds out specs and a design document, breaks work into actionable tasks, then implements.

OpenSpec OPSX workflow diagram

The "Humans in the Loop" framing that came with it made a lot of sense: when the agent is confident, it acts. When it's uncertain, it pauses and asks. The human becomes a quality gate rather than doing every step manually.

One note from the session that I think is worth keeping: persist lessons learned in a dedicated file like CLAUDE.md or kaizen.md so the agent doesn't repeat the same mistakes across sessions.

Things I want to follow up on

A few things got flagged during this recitation that I want to actually try:

  • SenseCraft — for putting a trained ML model directly onto the XIAO MCU (relevant for my glasses project)
  • Sam3D — generates 3D buildings from small images, interesting for scanning workflows
  • Experiment with hardcoding AI inference directly into the glasses pipeline


3D Design and Printing

This week I designed and 3D printed two things, one for this week's assignment and one contributing to my final project: a Möbius strip with a honeycomb lattice structure, and a hanger for the XIAO Grove Shield on my glasses for testing the AR display.

Möbius Strip with a Honeycomb Lattice Structure (FDM + Resin Printing)

First, what is a Möbius strip?

What is a Möbius strip?

A Möbius strip is a surface with only one side and one edge, created by twisting a strip of paper and joining the ends.

Möbius strip concept illustration

How did I come up with the idea? From Avengers: Endgame, where the characters travel through time using a time travel theory based on the Möbius strip.

Time travel theory

In time travel theories, it represents a non-orientable, closed timelike curve where traveling along the loop allows returning to the past without hitting a boundary.

Following the futuristic theme, I wanted to generate the design using AI. Initially the idea was just to create a Möbius strip and print it, but I thought: why not make it a harder challenge?

Another fascinating structure that occurs widely in nature is the honeycomb. Famously built by bees, it represents a pinnacle of natural engineering, combining extreme material efficiency, structural strength, and mathematical precision.

Honeycomb structure reference

My idea ended up being a Möbius strip with a honeycomb lattice structure as its "building blocks."

AI makes sense for this kind of application, since these designs rely on predictable math that can be parameterized to generate different shapes. Here, an LLM like Codex-5.3 writes code that creates an STL accepted by the printers' slicer software.
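To show the kind of predictable math involved, here's the standard Möbius parametrization in numpy (a sketch of the underlying surface, not the actual mobius_honeycomb.py):

```python
import numpy as np

def mobius_points(R=40.0, width=12.0, n_u=200, n_v=9):
    """Sample points on a Möbius surface: as the angle u goes once around
    the ring of radius R, the strip's cross-section rotates by u/2,
    producing the characteristic half-twist."""
    u = np.linspace(0, 2 * np.pi, n_u)
    v = np.linspace(-width / 2, width / 2, n_v)
    U, V = np.meshgrid(u, v)
    x = (R + V * np.cos(U / 2)) * np.cos(U)
    y = (R + V * np.cos(U / 2)) * np.sin(U)
    z = V * np.sin(U / 2)
    return np.stack([x, y, z], axis=-1)

pts = mobius_points()
# The half-twist: the point at (u=0, v=+w/2) coincides with (u=2π, v=-w/2),
# which is why the strip has only one edge.
print(np.allclose(pts[-1, 0], pts[0, -1], atol=1e-9))  # True
```

The generated script builds on the same parametrization, placing honeycomb struts along this surface instead of raw points.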

I experimented with a few prompts, here's one that worked:

Generation Prompt

Generate a 3D-printable STL of a Möbius strip made from a single-layer honeycomb lattice structure (the honeycomb struts should form the strip itself, not a solid strip with a honeycomb pattern cut through it). Use Python with numpy, trimesh, and manifold3d for the geometry unions/booleans. The lattice should look organic/rounded and be printable, with slightly thin struts (~2.4mm). The honeycomb should tile seamlessly across the Möbius twist with no ugly seam at the join. Add a small attached rectangular plaque with engraved text that says Mobius Strip (readable and not mirrored), and include it in the same final mesh. Put all the key dimensions (radius, width, strip thickness, hex cell size, strut diameter, plaque dimensions/text settings) as variables at the top so I can tweak them. Verify the final mesh is watertight before exporting as mobius_honeycomb.stl. Also add a preview flag so I can visualize it before exporting.

I used ChatGPT's Codex-5.3 model to generate the structure. It produced the following Python code that generated the STL I imported to Bambu Studio software.

Download mobius_honeycomb.py

Möbius honeycomb in Bambu Studio - support view

One interesting aspect of this print is how much support it needed; rotating the model different ways resulted in different amounts of support (ofc, I used tree supports).

Möbius honeycomb in Bambu Studio - orientation view

The print finished quickly, but some parts broke off while I was removing the supports, so I used one of our cheaper soldering tools to weld the broken pieces back together.

Möbius strip print - broken support removal

Möbius strip print - repaired with soldering tool

Otherwise, the soldered parts are indistinguishable from the rest :D

Why additive not subtractive manufacturing for this design?

Because this Möbius honeycomb lattice has curved, intertwined, hard-to-reach internal struts and undercuts, so subtractive machining can't access it cleanly, while additive can print it as one piece directly.

Since this design had a bunch of fine details and a challenging structure, I wanted to try printing this using Resin printing. The print isn't complete yet, so here is a pic meanwhile!

We are using the Geeky 2 Resin printer.

Möbius strip in resin printer

Glasses Hangers for the XIAO Grove Shield

To progress on my final project, I designed a hanger for the XIAO Grove Shield I built last week, so I can hang it on my glasses. While this isn't directly tied to the final form factor of the glasses, it is a crucial component for my current testing as I acquire the additional components I need.

I will be building on top of this over the next few days to have a full AR display connected to my current glasses (even while not having all the lenses I need for a compact form factor :)

First, here's how the AR display works. I need to flip the display since that's how the prism refracts light:

AR display optics diagram - flipped display

An important aspect of this is the concept of vergence and how the human eye works. Currently, I only have the refraction prism and the screen, so for the display to be seen clearly, the light from it needs to travel enough distance before it hits my eye.

With only those 2 components, the distance required makes the system a bit bulky based on the calculations:

Optics calculation - display distance

Optics diagram - vergence and eye distance

Bulky, meaning the display needs to sit at a longer distance, extending out from the temple of the glasses. Later on, this will be addressed by adding a convex lens and a mirror.
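The convex-lens fix can be sanity-checked with the thin-lens equation: a display placed just inside the lens's focal length appears as a virtual image much farther away, so the physical path can shrink. The numbers below are illustrative assumptions, not my actual optics:

```python
def virtual_image_distance_mm(d_obj_mm, f_mm):
    """Thin-lens sketch: an object inside the focal length of a convex lens
    forms a magnified virtual image on the same side, at distance
    f*d/(f-d). That relaxed vergence distance is what the eye focuses on."""
    assert d_obj_mm < f_mm, "object must sit inside the focal length"
    return (f_mm * d_obj_mm) / (f_mm - d_obj_mm)

# A display 45 mm from a hypothetical 50 mm lens appears ~450 mm away:
print(round(virtual_image_distance_mm(45, 50)))  # 450
```

That 10x stretch of apparent distance is exactly why the lens lets the display sit close to the temple instead of sticking out.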

Anyways, to attach the XIAO ESP32-S3 and the display to my current glasses, I started by measuring the dimensions of my glasses to hang the XIAO Grove Shield on the temples.

Spiral Development

Later on, the processing and computing will be done on my custom PCB; this is just part of the spiral development of the project as I acquire the necessary skills.

Measuring glasses temple dimensions

Then, I went into Fusion, imported a 3D model of the Grove Shield, and designed the hangers around it.

Honestly, this time around, my work was much faster as I started to get used to Fusion's interface. A big step up from the past weeks :)

Hanger design in Fusion 360

Then, I got to printing and I oriented everything properly using Bambu Studio's cut and orient tool, but I forgot an important thing in this initial design.

Hanger oriented in Bambu Studio

I DID NOT CHAMFER MY DESIGN. Thus, the hangers didn't fit the XIAO's Grove Shield properly, so I edited the design and reprinted it :)

Revised hanger fitting on Grove Shield

It hangs properly! I will continue working on this design to add the AR display and my refraction prism. Check my development log for updates on that!


3D Scanning

Andre, our instructor, brought his Kinect for us to try, but it's essentially unsupported on macOS. If you have one, you could try downloading Skanect:

Skanect software interface

Instead, I used Polycam on my phone to scan my Meta Rayban Glasses. First, I took a ton of pictures from varying angles of the glasses on a neutral background.

Polycam photogrammetry scan setup

Then Polycam processed the photos, and honestly I wasn't so happy with the result, but I wanted to try something different anyway, so I didn't spend much time here :)

How Polycam works

Polycam uses photogrammetry — it takes a series of overlapping photos from different angles and reconstructs a 3D mesh by finding matching features across frames. On LiDAR-capable devices (newer iPhones/iPads), it can layer in depth data to make scans faster and more accurate. In my case, it was purely photo-based.

Polycam scan result of Meta Rayban Glasses

Scanning Our FabLab Using Hyperscape

Hyperscape is a scanning app for Meta Quest that captures environments using Gaussian Splatting, a completely different approach from traditional mesh reconstruction.

Instead of building a polygon mesh out of triangles, it represents the whole space as millions of tiny 3D ellipsoids ("splats"), each with its own position, color, size, and opacity.

The result is a photorealistic capture that holds up visually because it's reconstructing how light appeared in the scene, not just the geometry.

Gaussian Splatting explained

Gaussian Splatting trains on a set of images from multiple angles, then optimizes millions of point-like primitives (splats) in 3D space to match what each camera saw. No triangles, no mesh, just splats. Renders fast and looks great, but the output can't go directly into CAD or mesh-based workflows since there's no actual geometry underneath.

Gaussian splatting concept diagram
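To make the splat idea concrete, here's the opacity contribution of a single, already-projected 2D Gaussian at a pixel, the core term a splatting rasterizer accumulates per splat (heavily simplified, one splat instead of millions):

```python
import numpy as np

def splat_alpha(pixel, center, cov, opacity):
    """Evaluate one 2D Gaussian splat's opacity at a pixel: the splat's
    base opacity scaled by the Gaussian falloff defined by its 2x2
    covariance (which encodes the ellipse's size and orientation)."""
    d = np.asarray(pixel, float) - np.asarray(center, float)
    falloff = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
    return opacity * falloff

cov = np.array([[4.0, 0.0],   # elongated along x
                [0.0, 1.0]])
center_alpha = splat_alpha((10, 10), (10, 10), cov, 0.8)
edge_alpha = splat_alpha((14, 10), (10, 10), cov, 0.8)
print(center_alpha)               # 0.8 at the splat center
print(edge_alpha < center_alpha)  # True: opacity falls off with distance
```

A real renderer sorts splats by depth and alpha-blends these contributions front to back; the training loop optimizes each splat's position, covariance, color, and opacity to match the input photos.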

The first step was to open up the Hyperscape Capture (beta) app on my Oculus.

Beta limitation

While I am on its beta, I am not on the version that enables sharing 3D scanned spaces with others. It is still slowly rolling out.

Then, I moved slowly around our FabLab to scan it fully.

Scanning the FabLab with Meta Quest - view 1

Scanning the FabLab with Meta Quest - view 2

Once the scan was done, I needed to upload it. Fun fact: since our FabLab is in a basement and my Quest couldn't connect to the WiFi, I had to go outside and upload everything, looking like a GEEK.

Uploading the scan outside the FabLab

A few hours later, once processing was done, here are the results:

FabLab Gaussian splat result - view 1

FabLab Gaussian splat result - view 2

FabLab Gaussian splat result - view 3

FabLab Gaussian splat result - view 4

Here's a video of our lab. Streaming it requires a fast connection, so the video doesn't look as good as the images; the internet wasn't great at the time!

Here are some funny reactions I got when I shared the pics.

Funny reactions to the FabLab scan

When I shared it WITH OUR INSTRUCTOR, he thought it was a real picture!!

Export limitation

You can't really export 3D scans from Hyperscape, and it's generally difficult to meaningfully export 3D room meshes. I tried this application, but the mesh wasn't good.

Hyperscape is pretty cool, once I get access to the version I can share a world with, I will post the link here!


Original Design Files

Python script for generating möbius strip honeycomb structure:

Download mobius_honeycomb.py

3D printing testing file:

Download ksr_fdmtest_v4.stl

XIAO Grove Shield Glasses Mount:

Download glasses_mount.stl


This week's checklist

  • Linked to the group assignment page
  • Explained what you learned from testing the 3D printers
  • Documented how you designed and 3D printed your object and explained why it could not be easily made subtractively
  • Documented how you scanned an object
  • Included your original design files for 3D printing
  • Included your hero shots